Jan 30 16:56:28 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 30 16:56:28 crc restorecon[4695]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 
16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:56:28 crc 
restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 30 16:56:28 crc restorecon[4695]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:56:28 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 
16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:56:29 crc restorecon[4695]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 
16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc 
restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:56:29 crc restorecon[4695]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 30 16:56:29 crc restorecon[4695]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 30 16:56:29 crc kubenswrapper[4875]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 16:56:29 crc kubenswrapper[4875]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 30 16:56:29 crc kubenswrapper[4875]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 16:56:29 crc kubenswrapper[4875]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
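The long restorecon run above records exactly two outcomes per path: "Relabeled <path> from <ctx> to <ctx>" (the context was reset) and "<path> not reset as customized by admin to <ctx>" (skipped, because the target type, here mostly container_file_t, is treated as a customizable SELinux type that restorecon leaves alone unless forced with -F; on RHEL-family hosts the customizable types are listed in /etc/selinux/targeted/contexts/customizable_types). A minimal triage sketch in Python for summarizing a run like this one, assuming the journal text has been saved to a local file (the name kubelet-journal.log is hypothetical):

    import re
    from collections import Counter

    # The two restorecon outcomes that appear in this journal:
    #   "Relabeled <path> from <old-ctx> to <new-ctx>"     -> context was reset
    #   "<path> not reset as customized by admin to <ctx>" -> skipped: the type
    #     (e.g. container_file_t) is customizable, so restorecon leaves it alone
    RELABELED = re.compile(r"restorecon\[\d+\]: Relabeled (\S+) from (\S+) to (\S+)")
    NOT_RESET = re.compile(r"restorecon\[\d+\]: (\S+) not reset as customized by admin to (\S+)")

    with open("kubelet-journal.log") as fh:  # hypothetical local copy of this journal
        text = fh.read()

    relabeled = RELABELED.findall(text)
    skipped = NOT_RESET.findall(text)
    print(f"relabeled: {len(relabeled)}  skipped as customized: {len(skipped)}")

    # Tally the skipped entries by SELinux type to see which types dominate.
    by_type = Counter(ctx.split(":")[2] for _path, ctx in skipped)
    for selinux_type, count in by_type.most_common():
        print(f"  {selinux_type}: {count}")

In this section the skipped entries are overwhelmingly container_file_t paths under /var/lib/kubelet/pods, which is expected for kubelet-managed volumes and container scratch directories.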
Jan 30 16:56:29 crc kubenswrapper[4875]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 16:56:29 crc kubenswrapper[4875]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.906508 4875 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910143 4875 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910160 4875 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910165 4875 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910169 4875 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910173 4875 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910177 4875 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910185 4875 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910189 4875 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910193 4875 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910197 4875 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910201 4875 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910205 4875 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910211 4875 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
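The deprecation warnings above all point the same way: flags such as --container-runtime-endpoint, --volume-plugin-dir, --register-with-taints, and --system-reserved should move into the KubeletConfiguration file passed via --config. A minimal sketch of such a config, emitted as JSON (which the kubelet accepts, JSON being a YAML subset); the field names follow the kubelet.config.k8s.io/v1beta1 API as I understand it and should be verified against the cluster's kubelet version, and all values below are illustrative placeholders, not taken from this log:

    import json

    # Hypothetical KubeletConfiguration covering the deprecated flags warned
    # about above (v1beta1 field names; every value here is an example only).
    config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "containerRuntimeEndpoint": "unix:///var/run/crio/crio.sock",      # --container-runtime-endpoint
        "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",  # --volume-plugin-dir
        "registerWithTaints": [                                            # --register-with-taints
            {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
        ],
        "systemReserved": {"cpu": "500m", "memory": "1Gi"},                # --system-reserved
        # --minimum-container-ttl-duration has no direct field; the warning
        # above points to eviction settings instead:
        "evictionHard": {"memory.available": "100Mi"},
        # --pod-infra-container-image has no config-file counterpart; per the
        # warning above, the sandbox image now comes from the CRI runtime.
    }
    print(json.dumps(config, indent=2))  # printable as a --config file
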
Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910216 4875 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910221 4875 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910226 4875 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910231 4875 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910236 4875 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910240 4875 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910244 4875 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910248 4875 feature_gate.go:330] unrecognized feature gate: Example Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910252 4875 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910255 4875 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910259 4875 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910263 4875 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910266 4875 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910271 4875 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910275 4875 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910278 4875 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910282 4875 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910286 4875 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910290 4875 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910294 4875 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910299 4875 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910304 4875 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910309 4875 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910312 4875 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910317 4875 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910322 4875 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910327 4875 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910331 4875 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910335 4875 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910338 4875 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910342 4875 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910346 4875 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910350 4875 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910354 4875 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910358 4875 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910362 4875 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910366 4875 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910370 4875 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910375 4875 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910378 4875 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910382 4875 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910385 4875 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910389 4875 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910395 4875 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
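The long run of "unrecognized feature gate" warnings above comes from feature_gate.go being handed a cluster-wide gate map: OpenShift feeds the kubelet gates for every component, so operator-level names the kubelet's own registry has never registered only produce a warning and are skipped. The sketch below is a simplified stand-in for that behavior, not the k8s.io/component-base/featuregate implementation, and its registry contents are invented for illustration.

```go
// Simplified stand-in for the feature_gate.go:330 behavior above: apply a
// configured gate map, warn on names the local registry does not recognize,
// and keep going. The known-gate set here is illustrative only.
package main

import "log"

var known = map[string]bool{ // gate name -> default value
	"KMSv1":                     false,
	"ValidatingAdmissionPolicy": true,
}

func applyGates(configured map[string]bool) map[string]bool {
	effective := make(map[string]bool, len(known))
	for name, def := range known {
		effective[name] = def
	}
	for name, enabled := range configured {
		if _, ok := known[name]; !ok {
			log.Printf("unrecognized feature gate: %s", name)
			continue
		}
		effective[name] = enabled
	}
	return effective
}

func main() {
	// Operator-level names such as RouteAdvertisements are unknown to the
	// kubelet's registry, so they warn instead of taking effect.
	eff := applyGates(map[string]bool{
		"KMSv1":               true,
		"RouteAdvertisements": false,
	})
	log.Printf("feature gates: %v", eff)
}
```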
Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910400 4875 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910403 4875 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910408 4875 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910411 4875 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910416 4875 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910421 4875 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910425 4875 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910429 4875 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910433 4875 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910437 4875 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910441 4875 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910444 4875 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910448 4875 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.910451 4875 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911235 4875 flags.go:64] FLAG: --address="0.0.0.0" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911249 4875 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911276 4875 flags.go:64] FLAG: --anonymous-auth="true" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911282 4875 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911289 4875 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911293 4875 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911300 4875 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911307 4875 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911311 4875 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911316 4875 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911322 4875 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911327 4875 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911331 4875 flags.go:64] FLAG: 
--cgroup-driver="cgroupfs" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911336 4875 flags.go:64] FLAG: --cgroup-root="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911340 4875 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911344 4875 flags.go:64] FLAG: --client-ca-file="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911349 4875 flags.go:64] FLAG: --cloud-config="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911353 4875 flags.go:64] FLAG: --cloud-provider="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911357 4875 flags.go:64] FLAG: --cluster-dns="[]" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911362 4875 flags.go:64] FLAG: --cluster-domain="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911367 4875 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911371 4875 flags.go:64] FLAG: --config-dir="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911376 4875 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911381 4875 flags.go:64] FLAG: --container-log-max-files="5" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911387 4875 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911392 4875 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911397 4875 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911402 4875 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911407 4875 flags.go:64] FLAG: --contention-profiling="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911411 4875 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911415 4875 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911419 4875 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911423 4875 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911429 4875 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911433 4875 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911437 4875 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911441 4875 flags.go:64] FLAG: --enable-load-reader="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911445 4875 flags.go:64] FLAG: --enable-server="true" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911449 4875 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911455 4875 flags.go:64] FLAG: --event-burst="100" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911461 4875 flags.go:64] FLAG: --event-qps="50" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911468 4875 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911473 4875 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 30 
16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911478 4875 flags.go:64] FLAG: --eviction-hard="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911484 4875 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911488 4875 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911493 4875 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911498 4875 flags.go:64] FLAG: --eviction-soft="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911502 4875 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911506 4875 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911511 4875 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911515 4875 flags.go:64] FLAG: --experimental-mounter-path="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911519 4875 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911524 4875 flags.go:64] FLAG: --fail-swap-on="true" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911528 4875 flags.go:64] FLAG: --feature-gates="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911533 4875 flags.go:64] FLAG: --file-check-frequency="20s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911538 4875 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911542 4875 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911546 4875 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911550 4875 flags.go:64] FLAG: --healthz-port="10248" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911554 4875 flags.go:64] FLAG: --help="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911558 4875 flags.go:64] FLAG: --hostname-override="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911563 4875 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911571 4875 flags.go:64] FLAG: --http-check-frequency="20s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911575 4875 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911591 4875 flags.go:64] FLAG: --image-credential-provider-config="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911596 4875 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911600 4875 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911604 4875 flags.go:64] FLAG: --image-service-endpoint="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911608 4875 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911612 4875 flags.go:64] FLAG: --kube-api-burst="100" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911616 4875 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911621 4875 flags.go:64] FLAG: --kube-api-qps="50" Jan 30 16:56:29 crc 
kubenswrapper[4875]: I0130 16:56:29.911626 4875 flags.go:64] FLAG: --kube-reserved="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911630 4875 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911635 4875 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911639 4875 flags.go:64] FLAG: --kubelet-cgroups="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911643 4875 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911647 4875 flags.go:64] FLAG: --lock-file="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911651 4875 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911655 4875 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911660 4875 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911667 4875 flags.go:64] FLAG: --log-json-split-stream="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911671 4875 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911676 4875 flags.go:64] FLAG: --log-text-split-stream="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911680 4875 flags.go:64] FLAG: --logging-format="text" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911684 4875 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911689 4875 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911693 4875 flags.go:64] FLAG: --manifest-url="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911697 4875 flags.go:64] FLAG: --manifest-url-header="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911703 4875 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911708 4875 flags.go:64] FLAG: --max-open-files="1000000" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911714 4875 flags.go:64] FLAG: --max-pods="110" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911718 4875 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911722 4875 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911727 4875 flags.go:64] FLAG: --memory-manager-policy="None" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911731 4875 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911735 4875 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911739 4875 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911744 4875 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911755 4875 flags.go:64] FLAG: --node-status-max-images="50" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911759 4875 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911764 4875 
flags.go:64] FLAG: --oom-score-adj="-999" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911768 4875 flags.go:64] FLAG: --pod-cidr="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911773 4875 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911779 4875 flags.go:64] FLAG: --pod-manifest-path="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911784 4875 flags.go:64] FLAG: --pod-max-pids="-1" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911789 4875 flags.go:64] FLAG: --pods-per-core="0" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911793 4875 flags.go:64] FLAG: --port="10250" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911798 4875 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911802 4875 flags.go:64] FLAG: --provider-id="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911806 4875 flags.go:64] FLAG: --qos-reserved="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911811 4875 flags.go:64] FLAG: --read-only-port="10255" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911815 4875 flags.go:64] FLAG: --register-node="true" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911820 4875 flags.go:64] FLAG: --register-schedulable="true" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911824 4875 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911834 4875 flags.go:64] FLAG: --registry-burst="10" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911839 4875 flags.go:64] FLAG: --registry-qps="5" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911843 4875 flags.go:64] FLAG: --reserved-cpus="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911847 4875 flags.go:64] FLAG: --reserved-memory="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911853 4875 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911857 4875 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911862 4875 flags.go:64] FLAG: --rotate-certificates="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911866 4875 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911870 4875 flags.go:64] FLAG: --runonce="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911874 4875 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911879 4875 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911883 4875 flags.go:64] FLAG: --seccomp-default="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911887 4875 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911891 4875 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911896 4875 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911900 4875 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 
16:56:29.911905 4875 flags.go:64] FLAG: --storage-driver-password="root" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911909 4875 flags.go:64] FLAG: --storage-driver-secure="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911913 4875 flags.go:64] FLAG: --storage-driver-table="stats" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911918 4875 flags.go:64] FLAG: --storage-driver-user="root" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911922 4875 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911927 4875 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911932 4875 flags.go:64] FLAG: --system-cgroups="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911937 4875 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911943 4875 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911947 4875 flags.go:64] FLAG: --tls-cert-file="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911952 4875 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911957 4875 flags.go:64] FLAG: --tls-min-version="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911961 4875 flags.go:64] FLAG: --tls-private-key-file="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911965 4875 flags.go:64] FLAG: --topology-manager-policy="none" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911969 4875 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911974 4875 flags.go:64] FLAG: --topology-manager-scope="container" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911978 4875 flags.go:64] FLAG: --v="2" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911985 4875 flags.go:64] FLAG: --version="false" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911991 4875 flags.go:64] FLAG: --vmodule="" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.911996 4875 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.912001 4875 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912138 4875 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912143 4875 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912148 4875 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912153 4875 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
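The flags.go:64 block above is the kubelet logging every registered flag with its effective value right after parsing, which puts the node's complete effective configuration at the top of the journal. A minimal sketch of the same pattern follows; the kubelet does this with spf13/pflag, but the VisitAll idiom is identical in the standard library flag package used here, and the two flags registered are only examples.

```go
// Minimal sketch of the flags.go:64 `FLAG: --name="value"` dump above:
// after parsing, walk every registered flag and log its effective value.
package main

import (
	"flag"
	"log"
)

func main() {
	v := flag.Int("v", 2, "log verbosity")
	addr := flag.String("address", "0.0.0.0", "bind address")
	flag.Parse()

	// Logging every flag up front makes later debugging easier: the
	// effective configuration is recorded before anything else runs.
	flag.VisitAll(func(f *flag.Flag) {
		log.Printf("FLAG: --%s=%q", f.Name, f.Value.String())
	})

	_ = v
	_ = addr
}
```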
Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912158 4875 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912162 4875 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912166 4875 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912170 4875 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912174 4875 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912178 4875 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912182 4875 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912186 4875 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912190 4875 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912194 4875 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912197 4875 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912203 4875 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912208 4875 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912220 4875 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912226 4875 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912230 4875 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912234 4875 feature_gate.go:330] unrecognized feature gate: Example Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912238 4875 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912242 4875 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912246 4875 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912250 4875 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912254 4875 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912258 4875 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912262 4875 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912267 4875 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912271 4875 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912275 4875 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912278 4875 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912282 4875 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912285 4875 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912289 4875 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912293 4875 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912296 4875 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912300 4875 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912304 4875 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912308 4875 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912312 4875 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912315 4875 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912319 4875 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912322 4875 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912326 4875 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912329 4875 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912333 4875 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912337 4875 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912341 4875 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912347 4875 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912351 4875 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912355 4875 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912359 4875 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912363 4875 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 
16:56:29.912367 4875 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912371 4875 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912375 4875 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912379 4875 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912383 4875 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912387 4875 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912391 4875 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912395 4875 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912399 4875 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912402 4875 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912406 4875 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912410 4875 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912414 4875 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912418 4875 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912421 4875 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912425 4875 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.912429 4875 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.912435 4875 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.924385 4875 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.924424 4875 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924515 4875 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924524 4875 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924530 4875 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924535 
4875 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924539 4875 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924543 4875 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924547 4875 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924551 4875 feature_gate.go:330] unrecognized feature gate: Example Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924556 4875 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924559 4875 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924563 4875 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924568 4875 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924574 4875 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924578 4875 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924611 4875 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924616 4875 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924619 4875 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924623 4875 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924627 4875 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924630 4875 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924634 4875 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924637 4875 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924641 4875 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924644 4875 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924648 4875 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924651 4875 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924655 4875 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924659 4875 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924662 4875 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 16:56:29 crc 
kubenswrapper[4875]: W0130 16:56:29.924666 4875 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924670 4875 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924673 4875 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924679 4875 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924682 4875 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924686 4875 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924690 4875 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924694 4875 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924697 4875 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924702 4875 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924708 4875 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924714 4875 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924718 4875 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924722 4875 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924725 4875 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924729 4875 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924732 4875 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924736 4875 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924740 4875 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924743 4875 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924747 4875 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924752 4875 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924757 4875 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924761 4875 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924765 4875 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924769 4875 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924772 4875 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924776 4875 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924779 4875 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924783 4875 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924787 4875 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924791 4875 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924795 4875 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924799 4875 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924803 4875 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924806 4875 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924810 4875 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924813 4875 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924848 4875 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924853 4875 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924857 4875 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.924861 4875 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.924868 4875 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925010 4875 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925022 4875 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925026 4875 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925031 4875 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925036 4875 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925040 4875 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925044 4875 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925048 4875 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925051 4875 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925056 4875 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925060 4875 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
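The feature_gate.go:386 "feature gates: {map[...]}" lines above print the resolved gate set using Go's default map formatting, with sorted name:value pairs separated by spaces. For triaging logs like these, a small helper can pull that dump back into a structured map; the function below is a hypothetical convenience for log analysis, not part of any Kubernetes package.

```go
// Hypothetical log-triage helper: extract the effective gate map from a
// feature_gate.go:386 line such as
//   feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true ...]}
// Go prints map keys sorted and pairs space-separated, so splitting on
// whitespace is safe for these gate names.
package main

import (
	"fmt"
	"strings"
)

func parseGates(line string) map[string]bool {
	start := strings.Index(line, "map[")
	end := strings.LastIndex(line, "]")
	if start < 0 || end < start {
		return nil
	}
	gates := make(map[string]bool)
	for _, pair := range strings.Fields(line[start+len("map[") : end]) {
		name, val, ok := strings.Cut(pair, ":")
		if !ok {
			continue
		}
		gates[name] = val == "true"
	}
	return gates
}

func main() {
	line := `feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}`
	fmt.Println(parseGates(line)["KMSv1"]) // true
}
```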
Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925067 4875 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925071 4875 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925075 4875 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925079 4875 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925083 4875 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925086 4875 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925091 4875 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925095 4875 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925099 4875 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925103 4875 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925107 4875 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925110 4875 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925114 4875 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925118 4875 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925121 4875 feature_gate.go:330] unrecognized feature gate: Example Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925125 4875 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925128 4875 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925131 4875 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925135 4875 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925139 4875 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925143 4875 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925148 4875 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925152 4875 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925156 4875 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925160 4875 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925163 4875 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925168 4875 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925172 4875 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925176 4875 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925179 4875 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925183 4875 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925186 4875 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925190 4875 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925193 4875 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925197 4875 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925200 4875 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925204 4875 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925208 4875 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
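The "Setting GA feature gate ValidatingAdmissionPolicy=true" and "Setting deprecated feature gate KMSv1=true" warnings above are produced by k8s.io/component-base/featuregate (the feature_gate.go in these records) whenever a gate that has already graduated or been deprecated is still set explicitly. The sketch below exercises that path under the assumption of the package's current public API; the two gate specs are registered here only for illustration, since the kubelet registers its own set.

```go
// Sketch of the feature_gate.go:351/:353 warnings above using
// k8s.io/component-base/featuregate, the package these log lines come from.
// Gate names and specs are illustrative.
package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

func main() {
	fg := featuregate.NewFeatureGate()
	if err := fg.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		"ValidatingAdmissionPolicy": {Default: true, PreRelease: featuregate.GA},
		"KMSv1":                     {Default: false, PreRelease: featuregate.Deprecated},
	}); err != nil {
		panic(err)
	}

	// Explicitly setting a GA gate logs "Setting GA feature gate ...=true",
	// and setting a deprecated gate logs the KMSv1-style warning, matching
	// the journal records above.
	if err := fg.SetFromMap(map[string]bool{
		"ValidatingAdmissionPolicy": true,
		"KMSv1":                     true,
	}); err != nil {
		panic(err)
	}

	fmt.Println("KMSv1 enabled:", fg.Enabled("KMSv1"))
}
```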
Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925213 4875 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925216 4875 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925221 4875 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925224 4875 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925228 4875 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925232 4875 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925236 4875 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925240 4875 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925244 4875 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925249 4875 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925253 4875 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925256 4875 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925260 4875 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925264 4875 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925267 4875 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925271 4875 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925275 4875 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925281 4875 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925284 4875 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925288 4875 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925291 4875 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 16:56:29 crc kubenswrapper[4875]: W0130 16:56:29.925294 4875 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.925301 4875 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false 
UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.926795 4875 server.go:940] "Client rotation is on, will bootstrap in background" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.931037 4875 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.931130 4875 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.932640 4875 server.go:997] "Starting client certificate rotation" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.932667 4875 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.932957 4875 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-21 22:11:24.539920168 +0000 UTC Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.933162 4875 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.955786 4875 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.957814 4875 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 16:56:29 crc kubenswrapper[4875]: E0130 16:56:29.959100 4875 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:56:29 crc kubenswrapper[4875]: I0130 16:56:29.981598 4875 log.go:25] "Validated CRI v1 runtime API" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.022907 4875 log.go:25] "Validated CRI v1 image API" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.025163 4875 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.029467 4875 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-30-16-51-39-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.029520 4875 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.048326 4875 manager.go:217] Machine: {Timestamp:2026-01-30 16:56:30.044641227 +0000 UTC m=+0.592004620 CPUVendorID:AuthenticAMD NumCores:12 
NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:1622a68f-c9e8-4b6d-b2e7-c5e881732b1e BootID:58694c46-6e56-4811-9d59-25ba706e9ec3 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:aa:c6:89 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:aa:c6:89 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:47:2e:21 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ca:41:7e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:60:47:34 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e0:07:97 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ee:59:0b:5a:1e:b9 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:02:b8:1a:d2:84:39 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 
Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.049708 4875 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.050079 4875 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.052796 4875 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.053154 4875 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.053211 4875 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.053554 4875 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.053573 4875 container_manager_linux.go:303] "Creating device plugin manager" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.054521 4875 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.054574 4875 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.055022 4875 state_mem.go:36] "Initialized new in-memory state store" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.055562 4875 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.059048 4875 kubelet.go:418] "Attempting to sync node with API server" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.059085 4875 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.059139 4875 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.059181 4875 kubelet.go:324] "Adding apiserver pod source" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.059207 4875 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.064088 4875 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 30 16:56:30 crc kubenswrapper[4875]: W0130 16:56:30.065018 4875 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 30 16:56:30 crc kubenswrapper[4875]: E0130 16:56:30.065297 4875 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:56:30 crc kubenswrapper[4875]: W0130 16:56:30.065242 4875 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 30 16:56:30 crc kubenswrapper[4875]: E0130 16:56:30.065725 4875 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.065393 4875 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.068640 4875 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.070567 4875 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.070783 4875 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.070957 4875 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.071073 4875 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.071200 4875 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.071309 4875 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.071431 4875 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.071559 4875 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.071762 4875 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.071895 4875 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.072015 4875 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.072123 4875 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.072281 4875 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 30 16:56:30 crc kubenswrapper[4875]: 
I0130 16:56:30.073150 4875 server.go:1280] "Started kubelet" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.073778 4875 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.073813 4875 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.073986 4875 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.075179 4875 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 16:56:30 crc systemd[1]: Started Kubernetes Kubelet. Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.076812 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.077001 4875 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.077260 4875 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.077287 4875 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.077386 4875 server.go:460] "Adding debug handlers to kubelet server" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.078069 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:19:22.419789062 +0000 UTC Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.078172 4875 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 16:56:30 crc kubenswrapper[4875]: W0130 16:56:30.078984 4875 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 30 16:56:30 crc kubenswrapper[4875]: E0130 16:56:30.079084 4875 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:56:30 crc kubenswrapper[4875]: E0130 16:56:30.078446 4875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="200ms" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.079741 4875 factory.go:55] Registering systemd factory Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.079782 4875 factory.go:221] Registration of the systemd container factory successfully Jan 30 16:56:30 crc kubenswrapper[4875]: E0130 16:56:30.077683 4875 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.080337 4875 factory.go:153] Registering CRI-O factory 
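[Editor's note: from here on, nearly every API call fails with "dial tcp 38.129.56.65:6443: connect: connection refused": the kubelet is up before the static-pod control plane is listening. A standalone diagnostic sketch that reproduces exactly this check is below; the endpoint is taken from the log, everything else is illustrative.]

package main

// Minimal TCP probe for the repeated "connect: connection refused" errors
// in the entries above. Diagnostic sketch only, not part of the kubelet;
// the api-int.crc.testing:6443 endpoint comes from the log lines.

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 3*time.Second)
	if err != nil {
		// Matches the journal errors while the control plane is still coming up.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver TCP endpoint is accepting connections")
}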
Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.080485 4875 factory.go:221] Registration of the crio container factory successfully Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.080709 4875 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.080865 4875 factory.go:103] Registering Raw factory Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.081000 4875 manager.go:1196] Started watching for new ooms in manager Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.086916 4875 manager.go:319] Starting recovery of all containers Jan 30 16:56:30 crc kubenswrapper[4875]: E0130 16:56:30.086642 4875 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.65:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f909ebf915c65 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 16:56:30.073101413 +0000 UTC m=+0.620464826,LastTimestamp:2026-01-30 16:56:30.073101413 +0000 UTC m=+0.620464826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095291 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095377 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095394 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095408 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095419 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095431 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" 
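[Editor's note: the long run of reconstruct.go:130 entries that follows is the volume manager's reconstruction pass: after a restart, and with the API server still unreachable, the kubelet rebuilds its actual state of world from what it finds on disk under /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<volume>, marking each mount "uncertain" until it can be reconciled. The sketch below walks that same directory layout, which is taken from the paths in this log; the traversal itself is illustrative, not kubelet code.]

package main

// Sketch of the on-disk scan behind the "Volume is marked as uncertain and
// added into the actual state" entries. Directory layout follows the paths
// in the log; error handling and output are illustrative only.

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	root := "/var/lib/kubelet/pods"
	pods, err := os.ReadDir(root)
	if err != nil {
		fmt.Println("cannot read kubelet pod dir:", err)
		return
	}
	for _, pod := range pods {
		volRoot := filepath.Join(root, pod.Name(), "volumes")
		plugins, err := os.ReadDir(volRoot)
		if err != nil {
			continue // pod directory without reconstructable volumes
		}
		for _, plugin := range plugins { // e.g. kubernetes.io~configmap, kubernetes.io~secret
			vols, _ := os.ReadDir(filepath.Join(volRoot, plugin.Name()))
			for _, v := range vols {
				fmt.Printf("reconstructed (uncertain): pod=%s plugin=%s volume=%s\n",
					pod.Name(), plugin.Name(), v.Name())
			}
		}
	}
}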
Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095444 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095454 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095472 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095483 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095496 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095507 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095521 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095537 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095547 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095559 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095571 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 30 16:56:30 crc 
kubenswrapper[4875]: I0130 16:56:30.095605 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095619 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095637 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095653 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095666 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095678 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095695 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095712 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095734 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095750 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095763 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 
16:56:30.095781 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095792 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095805 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095818 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095829 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095840 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095851 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095863 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095881 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095893 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095904 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095947 4875 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095964 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095978 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.095990 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096032 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096044 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096055 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096067 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096079 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096091 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096103 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096114 4875 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096130 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096146 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096159 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096171 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096188 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096200 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096210 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096220 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096232 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096246 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096258 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096270 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096281 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096291 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096303 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096314 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096325 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096336 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096348 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096363 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096374 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096441 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096478 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096489 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096501 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096511 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096523 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096533 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096545 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096557 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096603 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096617 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096633 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096646 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096657 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096669 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096681 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096697 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096711 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096725 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096737 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096749 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096760 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096773 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.096783 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098193 4875 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098216 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098232 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098245 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098256 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098267 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098280 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098291 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098306 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: 
I0130 16:56:30.098324 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098335 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098346 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098358 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098369 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098382 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098395 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098414 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098431 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098446 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098477 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098491 4875 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098503 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098516 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098528 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098540 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098552 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098564 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098575 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098610 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098621 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098633 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098645 4875 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098657 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098668 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098680 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098692 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098704 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098715 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098728 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098739 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098753 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098765 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098776 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098787 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098798 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098810 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098821 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098832 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098874 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098888 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098899 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098911 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098920 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098931 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098942 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098956 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098969 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098981 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.098991 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099002 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099012 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099023 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099034 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099045 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099056 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099067 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099079 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099090 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099100 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099113 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099125 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099137 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099149 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099162 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099172 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099183 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099195 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099205 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099217 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099227 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099238 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099248 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099260 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099272 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099285 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099297 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099308 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099322 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099336 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099346 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099358 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099367 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099378 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099389 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099402 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099411 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099421 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099433 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099445 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099457 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099467 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099477 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099488 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099497 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099509 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099519 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099528 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099537 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099547 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099557 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099567 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099577 4875 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099600 4875 reconstruct.go:97] "Volume reconstruction finished" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.099609 4875 reconciler.go:26] "Reconciler: start to sync state" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.108233 4875 manager.go:324] Recovery completed Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.120388 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.123265 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.123334 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.123346 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.124785 4875 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.124829 4875 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.124872 4875 state_mem.go:36] "Initialized new in-memory state store" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.132470 4875 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.134432 4875 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.134535 4875 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.134728 4875 kubelet.go:2335] "Starting kubelet main sync loop" Jan 30 16:56:30 crc kubenswrapper[4875]: E0130 16:56:30.134837 4875 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 16:56:30 crc kubenswrapper[4875]: W0130 16:56:30.135146 4875 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 30 16:56:30 crc kubenswrapper[4875]: E0130 16:56:30.135194 4875 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.140525 4875 policy_none.go:49] "None policy: Start" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.141807 4875 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.141838 4875 state_mem.go:35] "Initializing new in-memory state store" Jan 30 16:56:30 crc kubenswrapper[4875]: E0130 16:56:30.180747 4875 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 16:56:30 crc kubenswrapper[4875]: W0130 16:56:30.189818 4875 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective: no such device Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.206750 4875 manager.go:334] "Starting Device Plugin manager" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.206806 4875 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.206824 4875 server.go:79] "Starting device plugin registration server" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.207345 4875 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.207366 4875 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.207565 4875 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.207696 4875 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.207704 4875 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 16:56:30 crc kubenswrapper[4875]: E0130 16:56:30.219624 4875 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.235800 4875 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.235920 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.237147 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.237203 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.237217 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.237433 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.237918 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.237994 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.238534 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.238581 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.238619 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.238797 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.238902 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.238960 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.239271 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.239309 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.239324 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.240268 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.240307 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.240322 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.241360 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.241386 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.241396 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.241561 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.241649 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.241689 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.242295 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.242328 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.242343 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.242522 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.242782 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.242852 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.242855 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.242881 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.242890 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.243380 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.243438 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.243454 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.243809 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.243867 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.244233 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.244274 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.244288 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.245026 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.245064 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.245077 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:30 crc kubenswrapper[4875]: E0130 16:56:30.281116 4875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="400ms" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.301555 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.301596 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.301617 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.301634 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.301652 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.301668 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.301781 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.301847 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.301867 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.301882 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.301897 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.301917 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.301977 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.302037 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.302187 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.307621 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.308979 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.309039 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.309059 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.309099 4875 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:56:30 crc kubenswrapper[4875]: E0130 16:56:30.309828 4875 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.65:6443: connect: connection refused" node="crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.403647 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.403746 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:56:30 crc 
Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.403799 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.403827 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.403899 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.403925 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.403967 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.403956 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404043 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404075 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404067 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404131 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404167 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404174 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404247 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404283 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404338 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404398 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404447 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404500 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404445 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404453 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404575 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404500 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404665 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404670 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404738 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404799 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.404946 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.510710 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.512662 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.512741 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:30 crc 
kubenswrapper[4875]: I0130 16:56:30.512757 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.512789 4875 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:56:30 crc kubenswrapper[4875]: E0130 16:56:30.513424 4875 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.65:6443: connect: connection refused" node="crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.585970 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.611271 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.624096 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: W0130 16:56:30.633337 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-88b98a794599b4889409ae3c94124d1823225329f0ba692400e7f8b5d0208fb0 WatchSource:0}: Error finding container 88b98a794599b4889409ae3c94124d1823225329f0ba692400e7f8b5d0208fb0: Status 404 returned error can't find the container with id 88b98a794599b4889409ae3c94124d1823225329f0ba692400e7f8b5d0208fb0 Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.650543 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.658370 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:56:30 crc kubenswrapper[4875]: W0130 16:56:30.671051 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-6387dbbd651ff1382720bc65bce55fd45203acfa921ff0f4a8364200b13d2fc5 WatchSource:0}: Error finding container 6387dbbd651ff1382720bc65bce55fd45203acfa921ff0f4a8364200b13d2fc5: Status 404 returned error can't find the container with id 6387dbbd651ff1382720bc65bce55fd45203acfa921ff0f4a8364200b13d2fc5 Jan 30 16:56:30 crc kubenswrapper[4875]: W0130 16:56:30.674521 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-66f411981d10504535dd602c56efa629f777d2ff4913db9a53858d415e8fbc38 WatchSource:0}: Error finding container 66f411981d10504535dd602c56efa629f777d2ff4913db9a53858d415e8fbc38: Status 404 returned error can't find the container with id 66f411981d10504535dd602c56efa629f777d2ff4913db9a53858d415e8fbc38 Jan 30 16:56:30 crc kubenswrapper[4875]: E0130 16:56:30.682632 4875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="800ms" Jan 30 16:56:30 crc kubenswrapper[4875]: W0130 16:56:30.876706 4875 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 30 16:56:30 crc kubenswrapper[4875]: E0130 16:56:30.876831 4875 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.914090 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.915816 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.915858 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.915870 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:30 crc kubenswrapper[4875]: I0130 16:56:30.915899 4875 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:56:30 crc kubenswrapper[4875]: E0130 16:56:30.916397 4875 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.65:6443: connect: connection refused" node="crc" Jan 30 16:56:31 crc kubenswrapper[4875]: I0130 16:56:31.075897 4875 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 
38.129.56.65:6443: connect: connection refused Jan 30 16:56:31 crc kubenswrapper[4875]: I0130 16:56:31.078969 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 22:09:52.708745983 +0000 UTC Jan 30 16:56:31 crc kubenswrapper[4875]: I0130 16:56:31.140566 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6387dbbd651ff1382720bc65bce55fd45203acfa921ff0f4a8364200b13d2fc5"} Jan 30 16:56:31 crc kubenswrapper[4875]: I0130 16:56:31.141809 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9fcfa0e4af3463f02564f89ec4ab26691f6f755e25c20fd65e455cc8b1e59f4e"} Jan 30 16:56:31 crc kubenswrapper[4875]: I0130 16:56:31.142976 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"80ecf4059262061c31cb02ccc2bc05c117cc0f08434c41f18d288b5e40c399b5"} Jan 30 16:56:31 crc kubenswrapper[4875]: I0130 16:56:31.144077 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"88b98a794599b4889409ae3c94124d1823225329f0ba692400e7f8b5d0208fb0"} Jan 30 16:56:31 crc kubenswrapper[4875]: I0130 16:56:31.144951 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"66f411981d10504535dd602c56efa629f777d2ff4913db9a53858d415e8fbc38"} Jan 30 16:56:31 crc kubenswrapper[4875]: E0130 16:56:31.393157 4875 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.65:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f909ebf915c65 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 16:56:30.073101413 +0000 UTC m=+0.620464826,LastTimestamp:2026-01-30 16:56:30.073101413 +0000 UTC m=+0.620464826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 16:56:31 crc kubenswrapper[4875]: W0130 16:56:31.462157 4875 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 30 16:56:31 crc kubenswrapper[4875]: E0130 16:56:31.462236 4875 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:56:31 crc kubenswrapper[4875]: E0130 16:56:31.483427 4875 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="1.6s" Jan 30 16:56:31 crc kubenswrapper[4875]: W0130 16:56:31.575608 4875 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 30 16:56:31 crc kubenswrapper[4875]: E0130 16:56:31.575752 4875 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:56:31 crc kubenswrapper[4875]: W0130 16:56:31.658320 4875 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 30 16:56:31 crc kubenswrapper[4875]: E0130 16:56:31.658430 4875 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:56:31 crc kubenswrapper[4875]: I0130 16:56:31.716889 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:31 crc kubenswrapper[4875]: I0130 16:56:31.719056 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:31 crc kubenswrapper[4875]: I0130 16:56:31.719122 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:31 crc kubenswrapper[4875]: I0130 16:56:31.719135 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:31 crc kubenswrapper[4875]: I0130 16:56:31.719171 4875 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:56:31 crc kubenswrapper[4875]: E0130 16:56:31.719878 4875 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.65:6443: connect: connection refused" node="crc" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.075287 4875 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.079485 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:36:31.886038415 +0000 UTC Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.136370 4875 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 16:56:32 crc 
kubenswrapper[4875]: E0130 16:56:32.137981 4875 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.150103 4875 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="f8898eafcfe22a7ee768bab7d5557199f7e90f22053ffaea0d39edf906c69889" exitCode=0 Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.150167 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"f8898eafcfe22a7ee768bab7d5557199f7e90f22053ffaea0d39edf906c69889"} Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.150237 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.151659 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.151705 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.151721 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.152716 4875 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8" exitCode=0 Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.152780 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8"} Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.152826 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.153732 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.153757 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.153771 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.157323 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3"} Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.157359 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0"} Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.157374 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d"} Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.157387 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.157386 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92"} Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.161355 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.161414 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.161428 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.162513 4875 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923" exitCode=0 Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.162689 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923"} Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.162739 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.163942 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.163975 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.163987 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.165488 4875 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d" exitCode=0 Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.165546 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d"} Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.165609 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.166205 4875 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.166743 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.167246 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.167271 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.167830 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.167862 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:32 crc kubenswrapper[4875]: I0130 16:56:32.167875 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:32 crc kubenswrapper[4875]: W0130 16:56:32.576770 4875 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 30 16:56:32 crc kubenswrapper[4875]: E0130 16:56:32.576865 4875 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.075259 4875 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.080043 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 08:18:25.237391277 +0000 UTC Jan 30 16:56:33 crc kubenswrapper[4875]: E0130 16:56:33.084938 4875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="3.2s" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.172964 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"41e792bd5d0c930c7e45a3b73fdd1c146e50f7d686f9b7ded43e66de3547804b"} Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.173017 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d9c9696f430b3b9f427ae6573b228d01d9296814e8983dd48ade9374ab323d72"} Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.173029 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3602451d315d0555abce0fd45866f7191ef2b169be6a2b71df9b206844d1eaa8"} Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.173116 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.174244 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.174268 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.174280 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.176813 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e"} Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.176850 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666"} Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.176864 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b"} Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.176878 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95"} Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.178887 4875 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99" exitCode=0 Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.178949 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99"} Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.179138 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.180425 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.180470 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.180485 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.181438 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c0fc6c88a382e130d540ed1bbf460e3d8de5f41d159555c7e8040b2816b320f6"} Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.181471 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.181474 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.182372 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.182399 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.182411 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.182433 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.182447 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.182455 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.320497 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.322205 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.322433 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.322454 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.322498 4875 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:56:33 crc kubenswrapper[4875]: E0130 16:56:33.323313 4875 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.65:6443: connect: connection refused" node="crc" Jan 30 16:56:33 crc kubenswrapper[4875]: W0130 16:56:33.631457 4875 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 30 16:56:33 crc kubenswrapper[4875]: E0130 16:56:33.631570 4875 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:56:33 crc kubenswrapper[4875]: I0130 16:56:33.871851 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:56:33 crc 
kubenswrapper[4875]: I0130 16:56:33.881531 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:56:33 crc kubenswrapper[4875]: W0130 16:56:33.951671 4875 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.65:6443: connect: connection refused Jan 30 16:56:33 crc kubenswrapper[4875]: E0130 16:56:33.951837 4875 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.65:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.080526 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:52:52.29761614 +0000 UTC Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.187430 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d"} Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.187577 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.188460 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.188487 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.188499 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.191292 4875 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86" exitCode=0 Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.191380 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86"} Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.191509 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.191523 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.191509 4875 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.191644 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.191507 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:34 crc 
kubenswrapper[4875]: I0130 16:56:34.192893 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.192929 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.192947 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.192962 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.192989 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.193000 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.193191 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.193204 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.193212 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.193270 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.193281 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:34 crc kubenswrapper[4875]: I0130 16:56:34.193290 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.081615 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 02:36:52.025549828 +0000 UTC Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.174140 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.198828 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf"} Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.198873 4875 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.198900 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b"} Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.198924 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e"} Jan 30 16:56:35 crc 
kubenswrapper[4875]: I0130 16:56:35.198945 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce"} Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.198924 4875 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.198963 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5"} Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.198930 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.198988 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.198950 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.202803 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.202856 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.202869 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.202883 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.202897 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.202936 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.203118 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.203140 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.203154 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.907828 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.908081 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.909703 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.909757 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:35 crc kubenswrapper[4875]: I0130 16:56:35.909767 4875 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:36 crc kubenswrapper[4875]: I0130 16:56:36.082397 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:13:32.74727997 +0000 UTC Jan 30 16:56:36 crc kubenswrapper[4875]: I0130 16:56:36.201292 4875 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:56:36 crc kubenswrapper[4875]: I0130 16:56:36.201326 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:36 crc kubenswrapper[4875]: I0130 16:56:36.201346 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:36 crc kubenswrapper[4875]: I0130 16:56:36.202780 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:36 crc kubenswrapper[4875]: I0130 16:56:36.202844 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:36 crc kubenswrapper[4875]: I0130 16:56:36.202859 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:36 crc kubenswrapper[4875]: I0130 16:56:36.202862 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:36 crc kubenswrapper[4875]: I0130 16:56:36.202911 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:36 crc kubenswrapper[4875]: I0130 16:56:36.202922 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:36 crc kubenswrapper[4875]: I0130 16:56:36.469542 4875 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 16:56:36 crc kubenswrapper[4875]: I0130 16:56:36.523906 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:36 crc kubenswrapper[4875]: I0130 16:56:36.525781 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:36 crc kubenswrapper[4875]: I0130 16:56:36.525864 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:36 crc kubenswrapper[4875]: I0130 16:56:36.525887 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:36 crc kubenswrapper[4875]: I0130 16:56:36.525926 4875 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:56:37 crc kubenswrapper[4875]: I0130 16:56:37.083572 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 18:49:10.325236679 +0000 UTC Jan 30 16:56:37 crc kubenswrapper[4875]: I0130 16:56:37.119949 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:56:37 crc kubenswrapper[4875]: I0130 16:56:37.120139 4875 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:56:37 crc kubenswrapper[4875]: I0130 16:56:37.120184 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:37 
crc kubenswrapper[4875]: I0130 16:56:37.121769 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:37 crc kubenswrapper[4875]: I0130 16:56:37.121819 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:37 crc kubenswrapper[4875]: I0130 16:56:37.121834 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:37 crc kubenswrapper[4875]: I0130 16:56:37.873214 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 30 16:56:37 crc kubenswrapper[4875]: I0130 16:56:37.873552 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:37 crc kubenswrapper[4875]: I0130 16:56:37.875284 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:37 crc kubenswrapper[4875]: I0130 16:56:37.875347 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:37 crc kubenswrapper[4875]: I0130 16:56:37.875359 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:37 crc kubenswrapper[4875]: I0130 16:56:37.963316 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:56:37 crc kubenswrapper[4875]: I0130 16:56:37.963668 4875 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:56:37 crc kubenswrapper[4875]: I0130 16:56:37.963743 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:37 crc kubenswrapper[4875]: I0130 16:56:37.965701 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:37 crc kubenswrapper[4875]: I0130 16:56:37.965763 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:37 crc kubenswrapper[4875]: I0130 16:56:37.965785 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:38 crc kubenswrapper[4875]: I0130 16:56:38.085257 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 01:29:12.958831997 +0000 UTC Jan 30 16:56:38 crc kubenswrapper[4875]: I0130 16:56:38.604411 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:56:38 crc kubenswrapper[4875]: I0130 16:56:38.604678 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:38 crc kubenswrapper[4875]: I0130 16:56:38.606336 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:38 crc kubenswrapper[4875]: I0130 16:56:38.606404 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:38 crc kubenswrapper[4875]: I0130 16:56:38.606417 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:39 crc kubenswrapper[4875]: I0130 16:56:39.086076 4875 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 19:01:52.700714069 +0000 UTC Jan 30 16:56:39 crc kubenswrapper[4875]: I0130 16:56:39.101266 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:56:39 crc kubenswrapper[4875]: I0130 16:56:39.101454 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:39 crc kubenswrapper[4875]: I0130 16:56:39.102960 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:39 crc kubenswrapper[4875]: I0130 16:56:39.103051 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:39 crc kubenswrapper[4875]: I0130 16:56:39.103121 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:39 crc kubenswrapper[4875]: I0130 16:56:39.739836 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:56:39 crc kubenswrapper[4875]: I0130 16:56:39.740080 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:39 crc kubenswrapper[4875]: I0130 16:56:39.742012 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:39 crc kubenswrapper[4875]: I0130 16:56:39.742074 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:39 crc kubenswrapper[4875]: I0130 16:56:39.742092 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:40 crc kubenswrapper[4875]: I0130 16:56:40.086511 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:13:42.478858623 +0000 UTC Jan 30 16:56:40 crc kubenswrapper[4875]: E0130 16:56:40.219910 4875 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 16:56:41 crc kubenswrapper[4875]: I0130 16:56:41.087774 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 09:32:05.522958138 +0000 UTC Jan 30 16:56:42 crc kubenswrapper[4875]: I0130 16:56:42.087991 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 20:16:00.88938532 +0000 UTC Jan 30 16:56:42 crc kubenswrapper[4875]: I0130 16:56:42.740675 4875 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 16:56:42 crc kubenswrapper[4875]: I0130 16:56:42.741060 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get 
\"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 16:56:43 crc kubenswrapper[4875]: I0130 16:56:43.088608 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 15:46:45.591544311 +0000 UTC Jan 30 16:56:43 crc kubenswrapper[4875]: I0130 16:56:43.547951 4875 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 16:56:43 crc kubenswrapper[4875]: I0130 16:56:43.548043 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 16:56:43 crc kubenswrapper[4875]: I0130 16:56:43.877737 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 30 16:56:43 crc kubenswrapper[4875]: I0130 16:56:43.877944 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:43 crc kubenswrapper[4875]: I0130 16:56:43.879321 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:43 crc kubenswrapper[4875]: I0130 16:56:43.879345 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:43 crc kubenswrapper[4875]: I0130 16:56:43.879355 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:44 crc kubenswrapper[4875]: I0130 16:56:44.075736 4875 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 30 16:56:44 crc kubenswrapper[4875]: I0130 16:56:44.089297 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 11:02:25.148116024 +0000 UTC Jan 30 16:56:45 crc kubenswrapper[4875]: I0130 16:56:45.016396 4875 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 16:56:45 crc kubenswrapper[4875]: I0130 16:56:45.016506 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 16:56:45 crc kubenswrapper[4875]: I0130 16:56:45.022550 4875 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 16:56:45 crc kubenswrapper[4875]: I0130 16:56:45.022669 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 16:56:45 crc kubenswrapper[4875]: I0130 16:56:45.089769 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 04:43:53.108665797 +0000 UTC Jan 30 16:56:46 crc kubenswrapper[4875]: I0130 16:56:46.090627 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:41:07.273317471 +0000 UTC Jan 30 16:56:47 crc kubenswrapper[4875]: I0130 16:56:47.091663 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 09:41:46.09611045 +0000 UTC Jan 30 16:56:47 crc kubenswrapper[4875]: I0130 16:56:47.967550 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:56:47 crc kubenswrapper[4875]: I0130 16:56:47.967735 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:47 crc kubenswrapper[4875]: I0130 16:56:47.968823 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:47 crc kubenswrapper[4875]: I0130 16:56:47.968866 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:47 crc kubenswrapper[4875]: I0130 16:56:47.968877 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:47 crc kubenswrapper[4875]: I0130 16:56:47.974276 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:56:48 crc kubenswrapper[4875]: I0130 16:56:48.092536 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 10:37:17.62134362 +0000 UTC Jan 30 16:56:48 crc kubenswrapper[4875]: I0130 16:56:48.235388 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:48 crc kubenswrapper[4875]: I0130 16:56:48.236361 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:48 crc kubenswrapper[4875]: I0130 16:56:48.236395 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:48 crc kubenswrapper[4875]: I0130 16:56:48.236405 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:48 crc kubenswrapper[4875]: I0130 16:56:48.612894 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:56:48 crc kubenswrapper[4875]: I0130 16:56:48.613073 4875 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Jan 30 16:56:48 crc kubenswrapper[4875]: I0130 16:56:48.614611 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:48 crc kubenswrapper[4875]: I0130 16:56:48.614683 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:48 crc kubenswrapper[4875]: I0130 16:56:48.614698 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:49 crc kubenswrapper[4875]: I0130 16:56:49.093239 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 03:44:48.831636164 +0000 UTC Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.016681 4875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.019368 4875 trace.go:236] Trace[1991918898]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 16:56:34.667) (total time: 15351ms): Jan 30 16:56:50 crc kubenswrapper[4875]: Trace[1991918898]: ---"Objects listed" error: 15351ms (16:56:50.019) Jan 30 16:56:50 crc kubenswrapper[4875]: Trace[1991918898]: [15.351573411s] [15.351573411s] END Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.019395 4875 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.020478 4875 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.020642 4875 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.024085 4875 trace.go:236] Trace[1885054680]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 16:56:36.416) (total time: 13607ms): Jan 30 16:56:50 crc kubenswrapper[4875]: Trace[1885054680]: ---"Objects listed" error: 13607ms (16:56:50.023) Jan 30 16:56:50 crc kubenswrapper[4875]: Trace[1885054680]: [13.607822021s] [13.607822021s] END Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.024120 4875 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.026326 4875 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.030271 4875 trace.go:236] Trace[1723914566]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 16:56:37.388) (total time: 12641ms): Jan 30 16:56:50 crc kubenswrapper[4875]: Trace[1723914566]: ---"Objects listed" error: 12641ms (16:56:50.029) Jan 30 16:56:50 crc kubenswrapper[4875]: Trace[1723914566]: [12.641652425s] [12.641652425s] END Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.030306 4875 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.031630 4875 trace.go:236] 
Trace[603660296]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 16:56:39.063) (total time: 10968ms): Jan 30 16:56:50 crc kubenswrapper[4875]: Trace[603660296]: ---"Objects listed" error: 10968ms (16:56:50.031) Jan 30 16:56:50 crc kubenswrapper[4875]: Trace[603660296]: [10.968186073s] [10.968186073s] END Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.031675 4875 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.049958 4875 csr.go:261] certificate signing request csr-vm7fp is approved, waiting to be issued Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.058150 4875 csr.go:257] certificate signing request csr-vm7fp is issued Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.067914 4875 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.067979 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.071913 4875 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53014->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.072019 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53014->192.168.126.11:17697: read: connection reset by peer" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.072472 4875 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.072545 4875 apiserver.go:52] "Watching apiserver" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.072551 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.075409 4875 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.075772 4875 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.076223 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.076388 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.076459 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.076531 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.076563 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.076531 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.076659 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.076924 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.077045 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.077788 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.078997 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.079076 4875 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.079471 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.079650 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.079706 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.079748 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.080968 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.081061 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.082383 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.093396 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 19:18:16.834266185 +0000 UTC Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.097389 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.103511 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.110472 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121085 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121147 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121181 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121213 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121239 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121262 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121285 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121309 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121342 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121406 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121434 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121461 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121489 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121512 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121537 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121560 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121614 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121654 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121685 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121713 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121765 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121795 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121830 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.121865 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.122565 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.122724 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.122760 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.122790 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.122820 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.122842 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.122865 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.122893 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.122917 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.122938 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.122962 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.122987 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123018 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123056 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123082 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123108 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123136 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123158 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123184 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123209 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123233 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123265 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123295 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123321 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123347 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123373 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123404 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123424 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123420 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123452 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123688 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123729 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123774 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123812 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123851 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.123882 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124022 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124056 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124074 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124109 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124137 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124160 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124183 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124208 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124230 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124257 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124299 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124322 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") 
pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124346 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124381 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124402 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124431 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124449 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124456 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124482 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124501 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124524 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124546 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124565 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124606 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124632 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124646 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124654 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124714 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124736 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124773 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124796 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124820 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124841 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124872 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124900 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124925 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124954 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.124984 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125004 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125033 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125068 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125080 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125094 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125117 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125139 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125164 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125187 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125209 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125309 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125664 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125703 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125730 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125754 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125778 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125800 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125824 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125848 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125869 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125893 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125916 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125941 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125961 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.125983 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126006 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126025 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126049 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126049 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126070 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126127 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126153 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126178 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126201 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126222 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126247 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126272 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126292 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126316 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126340 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126361 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126367 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126387 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126427 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126463 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126499 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126540 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126622 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: 
\"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126640 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126656 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126691 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126714 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126749 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126777 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126798 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126823 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126855 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126880 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126903 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126929 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126948 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126973 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.126999 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127026 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127059 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127090 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127113 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127130 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" 
(UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127151 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127166 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127195 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127223 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127246 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127267 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127296 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127323 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127346 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127371 4875 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127391 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127414 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127439 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127458 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127481 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127506 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127523 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127546 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127557 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). 
InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127594 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127636 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127658 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127687 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127722 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127746 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127780 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127803 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127825 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127836 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod 
"5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127856 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127882 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127907 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127929 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127956 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.127985 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128013 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128040 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128063 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128084 4875 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128085 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128106 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128134 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128158 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128225 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128254 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128279 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128294 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128306 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128335 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128360 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128383 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128404 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128428 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128450 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128473 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128491 4875 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128501 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128524 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128552 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128653 4875 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128666 4875 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128678 4875 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128695 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128708 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128720 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128737 4875 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: 
I0130 16:56:50.128753 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128766 4875 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128778 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128792 4875 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128803 4875 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128815 4875 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.128825 4875 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.128906 4875 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.128979 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:50.628956206 +0000 UTC m=+21.176319589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.129029 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.129751 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.129800 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.129872 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.129885 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.129974 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.130151 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.130883 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.130907 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.130959 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.130962 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.131047 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.131136 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.131825 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.132040 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.132105 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.132247 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.132437 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.132518 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.133135 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.133294 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.133406 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.133838 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.133890 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.133975 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.132725 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.134745 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.135239 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.135391 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:50.635365469 +0000 UTC m=+21.182728862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.137360 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.137475 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.137810 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.139191 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.139507 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.139788 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.139850 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.140062 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.140092 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.140290 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.140423 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.140525 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.140657 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.140675 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.140863 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.141003 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.141006 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.141139 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.141465 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.141547 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.141716 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.141914 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.142148 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.142387 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.142493 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.142693 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.142728 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.143050 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.143091 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.143348 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.143462 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.143489 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.143519 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.143626 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.143826 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.143826 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.143913 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.144358 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.144656 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.144992 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.145198 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.143752 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.145600 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.145626 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.146298 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.146351 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.146796 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.147859 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.148067 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.148215 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.148366 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.148630 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.148652 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.148765 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.148928 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.148981 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.149247 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.149375 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.149805 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.150772 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.151097 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.151300 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.151504 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.151638 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.151995 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.152111 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.152510 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.152782 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.153798 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.153649 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.153939 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.154446 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.148400 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.154930 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.155216 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.155372 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.155546 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.155639 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.155893 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.155912 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.156295 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.156795 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.156798 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.156843 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.160641 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.169442 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.169631 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.169685 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.169904 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.169995 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.170103 4875 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.170832 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.171009 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.174016 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.175834 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.176659 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.176715 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.176744 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.177627 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.179552 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.181190 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.182530 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.182613 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.183093 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.183287 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.183564 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.183839 4875 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.183846 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.183964 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:50.68393452 +0000 UTC m=+21.231298113 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.184124 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.184144 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.184943 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.185037 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.185483 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.185516 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.185539 4875 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.185637 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:50.685608893 +0000 UTC m=+21.232972466 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.185655 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.185485 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.187069 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.188282 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.188327 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.188341 4875 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.188408 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:50.68838434 +0000 UTC m=+21.235747723 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.191402 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.191625 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.192264 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.192304 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.192644 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.192982 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.193333 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.193965 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.194038 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.198675 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.198907 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.200097 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.200128 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.200343 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.201348 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.203004 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.203142 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.203693 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.204125 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.204474 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.204570 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.205009 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.205091 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.205352 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.205614 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.205990 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.206194 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.206343 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.206794 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.206864 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.206871 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.207076 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.207283 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.207395 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.207714 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.207402 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.208055 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.208217 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.208685 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.209416 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.209527 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.209990 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.210105 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.212038 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.212071 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.212421 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.214015 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.214186 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.214411 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.214969 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.215818 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.215922 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.217910 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.218982 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.219293 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.219821 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.220512 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.223477 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.224460 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.225717 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.228266 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.229102 4875 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.229327 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.229920 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230273 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230315 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230469 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc 
kubenswrapper[4875]: I0130 16:56:50.230578 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230618 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230632 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230643 4875 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230654 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230665 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230676 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230688 4875 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230703 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230706 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230715 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230728 4875 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230739 4875 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230750 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230761 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230772 4875 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230783 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230794 4875 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230804 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230813 4875 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230824 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230837 4875 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230849 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230918 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230951 4875 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230967 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230983 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230999 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231011 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.230995 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231021 4875 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231164 4875 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231181 4875 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231198 4875 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231214 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231223 4875 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231238 4875 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231248 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231258 4875 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231268 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node 
\"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231295 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231306 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231316 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231326 4875 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231336 4875 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231380 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231390 4875 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231400 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231408 4875 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231418 4875 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231427 4875 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231438 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231452 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231468 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231482 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231493 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231509 4875 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231519 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231529 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231539 4875 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231608 4875 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231619 4875 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231642 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231664 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231675 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231723 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231758 4875 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231771 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231782 4875 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231799 4875 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231810 4875 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231820 4875 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231833 4875 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231845 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231855 4875 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231865 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231875 4875 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231885 4875 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231895 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231905 4875 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231915 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231925 4875 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231936 4875 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231947 4875 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231955 4875 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231965 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231976 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231985 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.231997 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232007 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232017 4875 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232028 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232040 4875 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232052 4875 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232065 4875 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232077 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232088 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232098 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232109 4875 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232119 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232129 4875 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232138 4875 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232147 4875 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232157 4875 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232166 4875 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232205 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232216 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232227 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232238 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232247 4875 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232256 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232265 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232275 4875 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232284 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232295 4875 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232304 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232313 4875 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 
16:56:50.232323 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232343 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232351 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232362 4875 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232372 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232382 4875 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232393 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232404 4875 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232414 4875 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232422 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232431 4875 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232440 4875 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232449 4875 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232457 4875 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232470 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232481 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232490 4875 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232502 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232511 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232520 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232531 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232541 4875 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232550 4875 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232558 4875 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232568 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232577 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232598 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232608 4875 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232617 4875 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232626 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232635 4875 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232644 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232653 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232661 4875 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232671 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232680 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232689 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232700 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232709 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232717 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232727 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232736 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232745 4875 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232754 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232763 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232774 4875 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232783 4875 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232791 4875 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232800 4875 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232809 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232818 4875 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232827 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232837 4875 reconciler_common.go:293] "Volume detached for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232848 4875 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232856 4875 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232864 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232873 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232882 4875 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232890 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.232898 4875 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.234691 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.234966 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.234994 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.236110 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.237868 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.238400 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.241068 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.242933 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.243371 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.244296 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.245416 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.246485 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.247379 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.248083 4875 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d" exitCode=255 Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.248528 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.249301 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.250911 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.252897 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.253890 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.254091 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.255048 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.255737 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.256802 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.258219 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.258812 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.263518 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.264092 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.269470 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.270725 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.272419 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 16:56:50 crc 
kubenswrapper[4875]: I0130 16:56:50.280150 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.280233 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d"} Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.291060 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.295339 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-rzl5h"] Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.295644 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-9nnzd"] Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.295747 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-9wgsn"] Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.295997 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.296330 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rzl5h" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.296651 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-9nnzd" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.303699 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.304032 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.308001 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.308255 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.308653 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.308834 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.309024 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.309165 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.309348 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.309487 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.309707 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.309874 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.331510 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.333817 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6705291-da0f-49bd-acc7-6c2e027a3b54-host\") pod \"node-ca-9nnzd\" (UID: \"f6705291-da0f-49bd-acc7-6c2e027a3b54\") " pod="openshift-image-registry/node-ca-9nnzd" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.333863 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8-proxy-tls\") pod \"machine-config-daemon-9wgsn\" (UID: \"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\") " pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.333888 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/92bbdc00-4565-4f08-90ef-b14644f90a87-hosts-file\") pod \"node-resolver-rzl5h\" (UID: \"92bbdc00-4565-4f08-90ef-b14644f90a87\") " pod="openshift-dns/node-resolver-rzl5h" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.333910 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8slsr\" (UniqueName: \"kubernetes.io/projected/92bbdc00-4565-4f08-90ef-b14644f90a87-kube-api-access-8slsr\") pod \"node-resolver-rzl5h\" (UID: \"92bbdc00-4565-4f08-90ef-b14644f90a87\") " pod="openshift-dns/node-resolver-rzl5h" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.333932 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8-rootfs\") pod \"machine-config-daemon-9wgsn\" (UID: \"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\") " pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 16:56:50 crc 
kubenswrapper[4875]: I0130 16:56:50.333974 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f6705291-da0f-49bd-acc7-6c2e027a3b54-serviceca\") pod \"node-ca-9nnzd\" (UID: \"f6705291-da0f-49bd-acc7-6c2e027a3b54\") " pod="openshift-image-registry/node-ca-9nnzd" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.333993 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fvbd\" (UniqueName: \"kubernetes.io/projected/f6705291-da0f-49bd-acc7-6c2e027a3b54-kube-api-access-7fvbd\") pod \"node-ca-9nnzd\" (UID: \"f6705291-da0f-49bd-acc7-6c2e027a3b54\") " pod="openshift-image-registry/node-ca-9nnzd" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.334016 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8-mcd-auth-proxy-config\") pod \"machine-config-daemon-9wgsn\" (UID: \"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\") " pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.334038 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnkzj\" (UniqueName: \"kubernetes.io/projected/9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8-kube-api-access-dnkzj\") pod \"machine-config-daemon-9wgsn\" (UID: \"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\") " pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.334192 4875 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.334232 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.334242 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.334251 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.356127 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.389937 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.389933 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.402465 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:56:50 crc kubenswrapper[4875]: W0130 16:56:50.404678 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-2f0191dbca859bbbb6a4802f47cba51c4ec390227a382dcff4e2be6006b34b5e WatchSource:0}: Error finding container 2f0191dbca859bbbb6a4802f47cba51c4ec390227a382dcff4e2be6006b34b5e: Status 404 returned error can't find the container with id 2f0191dbca859bbbb6a4802f47cba51c4ec390227a382dcff4e2be6006b34b5e Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.413970 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.415437 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.429755 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.436213 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6705291-da0f-49bd-acc7-6c2e027a3b54-host\") pod \"node-ca-9nnzd\" (UID: \"f6705291-da0f-49bd-acc7-6c2e027a3b54\") " pod="openshift-image-registry/node-ca-9nnzd" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.436783 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8-proxy-tls\") pod \"machine-config-daemon-9wgsn\" (UID: \"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\") " pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.436834 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/92bbdc00-4565-4f08-90ef-b14644f90a87-hosts-file\") pod \"node-resolver-rzl5h\" (UID: \"92bbdc00-4565-4f08-90ef-b14644f90a87\") " pod="openshift-dns/node-resolver-rzl5h" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.436863 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8slsr\" (UniqueName: \"kubernetes.io/projected/92bbdc00-4565-4f08-90ef-b14644f90a87-kube-api-access-8slsr\") pod \"node-resolver-rzl5h\" (UID: \"92bbdc00-4565-4f08-90ef-b14644f90a87\") " pod="openshift-dns/node-resolver-rzl5h" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.436891 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8-rootfs\") pod \"machine-config-daemon-9wgsn\" (UID: \"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\") " pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.436926 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serviceca\" (UniqueName: \"kubernetes.io/configmap/f6705291-da0f-49bd-acc7-6c2e027a3b54-serviceca\") pod \"node-ca-9nnzd\" (UID: \"f6705291-da0f-49bd-acc7-6c2e027a3b54\") " pod="openshift-image-registry/node-ca-9nnzd" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.436952 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fvbd\" (UniqueName: \"kubernetes.io/projected/f6705291-da0f-49bd-acc7-6c2e027a3b54-kube-api-access-7fvbd\") pod \"node-ca-9nnzd\" (UID: \"f6705291-da0f-49bd-acc7-6c2e027a3b54\") " pod="openshift-image-registry/node-ca-9nnzd" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.436986 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8-mcd-auth-proxy-config\") pod \"machine-config-daemon-9wgsn\" (UID: \"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\") " pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.437012 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnkzj\" (UniqueName: \"kubernetes.io/projected/9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8-kube-api-access-dnkzj\") pod \"machine-config-daemon-9wgsn\" (UID: \"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\") " pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.436509 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6705291-da0f-49bd-acc7-6c2e027a3b54-host\") pod \"node-ca-9nnzd\" (UID: \"f6705291-da0f-49bd-acc7-6c2e027a3b54\") " pod="openshift-image-registry/node-ca-9nnzd" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.437291 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/92bbdc00-4565-4f08-90ef-b14644f90a87-hosts-file\") pod \"node-resolver-rzl5h\" (UID: \"92bbdc00-4565-4f08-90ef-b14644f90a87\") " pod="openshift-dns/node-resolver-rzl5h" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.437742 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8-rootfs\") pod \"machine-config-daemon-9wgsn\" (UID: \"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\") " pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.438427 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8-mcd-auth-proxy-config\") pod \"machine-config-daemon-9wgsn\" (UID: \"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\") " pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.438507 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f6705291-da0f-49bd-acc7-6c2e027a3b54-serviceca\") pod \"node-ca-9nnzd\" (UID: \"f6705291-da0f-49bd-acc7-6c2e027a3b54\") " pod="openshift-image-registry/node-ca-9nnzd" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.441376 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8-proxy-tls\") pod \"machine-config-daemon-9wgsn\" (UID: \"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\") " pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.446407 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.454864 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnkzj\" (UniqueName: \"kubernetes.io/projected/9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8-kube-api-access-dnkzj\") pod \"machine-config-daemon-9wgsn\" (UID: \"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\") " pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.455309 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fvbd\" (UniqueName: \"kubernetes.io/projected/f6705291-da0f-49bd-acc7-6c2e027a3b54-kube-api-access-7fvbd\") pod \"node-ca-9nnzd\" (UID: 
\"f6705291-da0f-49bd-acc7-6c2e027a3b54\") " pod="openshift-image-registry/node-ca-9nnzd" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.456354 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8slsr\" (UniqueName: \"kubernetes.io/projected/92bbdc00-4565-4f08-90ef-b14644f90a87-kube-api-access-8slsr\") pod \"node-resolver-rzl5h\" (UID: \"92bbdc00-4565-4f08-90ef-b14644f90a87\") " pod="openshift-dns/node-resolver-rzl5h" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.458782 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.460678 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.461191 4875 scope.go:117] "RemoveContainer" containerID="92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.474068 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.486784 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.519365 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.534828 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.544239 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.560543 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.572913 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.617470 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.625468 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rzl5h" Jan 30 16:56:50 crc kubenswrapper[4875]: W0130 16:56:50.632446 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cfabc70_3a7a_4fdb_bd21_f2648c9eabb8.slice/crio-64429d6604deb8ec03d8e4c68652f02760ca510a4152a4cdc31e262230be5945 WatchSource:0}: Error finding container 64429d6604deb8ec03d8e4c68652f02760ca510a4152a4cdc31e262230be5945: Status 404 returned error can't find the container with id 64429d6604deb8ec03d8e4c68652f02760ca510a4152a4cdc31e262230be5945 Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.633125 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-9nnzd" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.637897 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.638024 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.638180 4875 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.638258 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.638234863 +0000 UTC m=+22.185598246 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.638334 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.638325466 +0000 UTC m=+22.185688859 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:50 crc kubenswrapper[4875]: W0130 16:56:50.651402 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92bbdc00_4565_4f08_90ef_b14644f90a87.slice/crio-cffb5389799f6bddd40ac515cfd3adc8cb6b9f5046d308091268f1bcbc335d0c WatchSource:0}: Error finding container cffb5389799f6bddd40ac515cfd3adc8cb6b9f5046d308091268f1bcbc335d0c: Status 404 returned error can't find the container with id cffb5389799f6bddd40ac515cfd3adc8cb6b9f5046d308091268f1bcbc335d0c Jan 30 16:56:50 crc kubenswrapper[4875]: W0130 16:56:50.654029 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6705291_da0f_49bd_acc7_6c2e027a3b54.slice/crio-c2f396486fe9028c5e2199e6e6901b6935eac5465704e6e87a1b6ebdee7f1173 WatchSource:0}: Error finding container c2f396486fe9028c5e2199e6e6901b6935eac5465704e6e87a1b6ebdee7f1173: Status 404 returned error can't find the container with id c2f396486fe9028c5e2199e6e6901b6935eac5465704e6e87a1b6ebdee7f1173 Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.739307 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.739349 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:50 crc kubenswrapper[4875]: I0130 16:56:50.739389 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.739518 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.739549 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.739543 4875 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" 
not registered Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.739564 4875 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.739636 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.73961682 +0000 UTC m=+22.286980203 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.739655 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.739647621 +0000 UTC m=+22.287011004 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.739678 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.739693 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.739701 4875 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:50 crc kubenswrapper[4875]: E0130 16:56:50.739740 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.739722243 +0000 UTC m=+22.287085626 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.059947 4875 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 16:51:50 +0000 UTC, rotation deadline is 2026-11-11 12:39:03.290586151 +0000 UTC Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.060034 4875 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6835h42m12.230555519s for next certificate rotation Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.066553 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-hqmqg"] Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.067227 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.068764 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.069140 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.069658 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.069903 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.069994 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.072242 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mps6c"] Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.073310 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-ck4hq"] Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.073526 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.073865 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.075944 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.076252 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.076369 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.077116 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.077169 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.077124 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.077844 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.077942 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.078040 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.078056 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.093859 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 23:10:12.021274826 +0000 UTC Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.100401 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.110403 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.128352 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30
T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142254 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-systemd\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142301 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-var-lib-cni-multus\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142323 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-systemd-units\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142340 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovn-node-metrics-cert\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142356 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbb6z\" (UniqueName: \"kubernetes.io/projected/85cf29f6-017d-475a-b63c-cd1cab3c8132-kube-api-access-fbb6z\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142374 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-run-multus-certs\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142393 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-cni-bin\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142410 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142445 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-run-k8s-cni-cncf-io\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142464 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1f2be659-2cd0-4935-bf58-3e7681692d9b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142496 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-slash\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142604 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-log-socket\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142714 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1f2be659-2cd0-4935-bf58-3e7681692d9b-os-release\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142792 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-ovn\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142832 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovnkube-config\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142856 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-var-lib-kubelet\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142873 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1f2be659-2cd0-4935-bf58-3e7681692d9b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142890 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-cnibin\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142906 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-multus-socket-dir-parent\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142921 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-var-lib-cni-bin\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142960 4875 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-run-netns\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142979 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-multus-conf-dir\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.142999 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1f2be659-2cd0-4935-bf58-3e7681692d9b-cnibin\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143050 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-system-cni-dir\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143075 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-os-release\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143097 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-openvswitch\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143115 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-node-log\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143146 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-var-lib-openvswitch\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143168 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-multus-cni-dir\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143186 4875 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1f2be659-2cd0-4935-bf58-3e7681692d9b-system-cni-dir\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143202 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-etc-kubernetes\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143240 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-run-netns\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143270 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/562b7bc8-0631-497c-9b8a-05af82dcfff9-cni-binary-copy\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143289 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-env-overrides\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143306 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/562b7bc8-0631-497c-9b8a-05af82dcfff9-multus-daemon-config\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143324 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk4gt\" (UniqueName: \"kubernetes.io/projected/1f2be659-2cd0-4935-bf58-3e7681692d9b-kube-api-access-nk4gt\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143347 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-etc-openvswitch\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143363 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-kubelet\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 
30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143378 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovnkube-script-lib\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143392 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-hostroot\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143409 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnrgk\" (UniqueName: \"kubernetes.io/projected/562b7bc8-0631-497c-9b8a-05af82dcfff9-kube-api-access-mnrgk\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143436 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-run-ovn-kubernetes\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143453 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1f2be659-2cd0-4935-bf58-3e7681692d9b-cni-binary-copy\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.143470 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-cni-netd\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.158605 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.176271 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.204827 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.220092 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.234452 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245026 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-var-lib-cni-bin\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245080 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-cnibin\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245111 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-multus-socket-dir-parent\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245134 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-multus-conf-dir\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245156 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1f2be659-2cd0-4935-bf58-3e7681692d9b-cnibin\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245188 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-run-netns\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245182 
4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-var-lib-cni-bin\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245277 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-openvswitch\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245216 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-openvswitch\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245295 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-cnibin\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245329 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-node-log\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245355 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-system-cni-dir\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245374 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-os-release\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245397 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-multus-cni-dir\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245416 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1f2be659-2cd0-4935-bf58-3e7681692d9b-system-cni-dir\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245462 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-var-lib-openvswitch\") pod 
\"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245488 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-etc-kubernetes\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245513 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-run-netns\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245545 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/562b7bc8-0631-497c-9b8a-05af82dcfff9-cni-binary-copy\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245565 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk4gt\" (UniqueName: \"kubernetes.io/projected/1f2be659-2cd0-4935-bf58-3e7681692d9b-kube-api-access-nk4gt\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245571 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-multus-conf-dir\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245610 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-etc-openvswitch\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245631 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-env-overrides\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245632 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1f2be659-2cd0-4935-bf58-3e7681692d9b-cnibin\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245648 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/562b7bc8-0631-497c-9b8a-05af82dcfff9-multus-daemon-config\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 
30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245662 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-run-netns\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245671 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovnkube-script-lib\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245692 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-hostroot\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245694 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-etc-kubernetes\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245710 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnrgk\" (UniqueName: \"kubernetes.io/projected/562b7bc8-0631-497c-9b8a-05af82dcfff9-kube-api-access-mnrgk\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245722 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-node-log\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245759 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-system-cni-dir\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245761 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-kubelet\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245784 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-kubelet\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245795 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-run-ovn-kubernetes\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245815 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-run-netns\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245822 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1f2be659-2cd0-4935-bf58-3e7681692d9b-cni-binary-copy\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245845 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-cni-netd\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245864 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-systemd\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245883 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-var-lib-cni-multus\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245900 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbb6z\" (UniqueName: \"kubernetes.io/projected/85cf29f6-017d-475a-b63c-cd1cab3c8132-kube-api-access-fbb6z\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245921 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-run-multus-certs\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245940 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-systemd-units\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245957 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovn-node-metrics-cert\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245976 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-run-k8s-cni-cncf-io\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245995 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1f2be659-2cd0-4935-bf58-3e7681692d9b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.246013 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-slash\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.246033 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-log-socket\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.246053 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-cni-bin\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.246073 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.246097 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-ovn\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.246118 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovnkube-config\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.246139 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/1f2be659-2cd0-4935-bf58-3e7681692d9b-os-release\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.246162 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-var-lib-kubelet\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.246183 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1f2be659-2cd0-4935-bf58-3e7681692d9b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.246559 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.245543 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-multus-socket-dir-parent\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247145 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-hostroot\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247081 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/562b7bc8-0631-497c-9b8a-05af82dcfff9-cni-binary-copy\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247252 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-systemd-units\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247290 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-cni-bin\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247329 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1f2be659-2cd0-4935-bf58-3e7681692d9b-system-cni-dir\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247263 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-etc-openvswitch\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247381 4875 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-slash\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247386 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-log-socket\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247424 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-cni-netd\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247435 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-var-lib-openvswitch\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247461 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1f2be659-2cd0-4935-bf58-3e7681692d9b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247472 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-run-ovn-kubernetes\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247498 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-var-lib-cni-multus\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247468 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-systemd\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247520 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247719 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-os-release\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247771 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-var-lib-kubelet\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247782 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-ovn\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247727 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-multus-cni-dir\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247802 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1f2be659-2cd0-4935-bf58-3e7681692d9b-os-release\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247819 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-run-k8s-cni-cncf-io\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247797 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1f2be659-2cd0-4935-bf58-3e7681692d9b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247822 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/562b7bc8-0631-497c-9b8a-05af82dcfff9-host-run-multus-certs\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.247819 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1f2be659-2cd0-4935-bf58-3e7681692d9b-cni-binary-copy\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.248092 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/562b7bc8-0631-497c-9b8a-05af82dcfff9-multus-daemon-config\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " 
pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.248192 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-env-overrides\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.248924 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovnkube-config\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.249018 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovnkube-script-lib\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.253640 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9nnzd" event={"ID":"f6705291-da0f-49bd-acc7-6c2e027a3b54","Type":"ContainerStarted","Data":"75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5"} Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.254222 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9nnzd" event={"ID":"f6705291-da0f-49bd-acc7-6c2e027a3b54","Type":"ContainerStarted","Data":"c2f396486fe9028c5e2199e6e6901b6935eac5465704e6e87a1b6ebdee7f1173"} Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.254918 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovn-node-metrics-cert\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.256808 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerStarted","Data":"db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49"} Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.256921 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerStarted","Data":"5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69"} Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.256991 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerStarted","Data":"64429d6604deb8ec03d8e4c68652f02760ca510a4152a4cdc31e262230be5945"} Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.259344 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08"} Jan 30 16:56:51 
crc kubenswrapper[4875]: I0130 16:56:51.259406 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2f0191dbca859bbbb6a4802f47cba51c4ec390227a382dcff4e2be6006b34b5e"} Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.260849 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rzl5h" event={"ID":"92bbdc00-4565-4f08-90ef-b14644f90a87","Type":"ContainerStarted","Data":"2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5"} Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.260909 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rzl5h" event={"ID":"92bbdc00-4565-4f08-90ef-b14644f90a87","Type":"ContainerStarted","Data":"cffb5389799f6bddd40ac515cfd3adc8cb6b9f5046d308091268f1bcbc335d0c"} Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.261881 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"375a26fb08452f31e6930a17dc2d075f08e219674fab7a84c734fd7113fef6e6"} Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.263636 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5"} Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.263677 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d"} Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.263695 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bbd383c890c18237e826e8652658bb240a359717d18fea8a1ac88b35a8d61809"} Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.266143 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.268850 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk4gt\" (UniqueName: \"kubernetes.io/projected/1f2be659-2cd0-4935-bf58-3e7681692d9b-kube-api-access-nk4gt\") pod \"multus-additional-cni-plugins-hqmqg\" (UID: \"1f2be659-2cd0-4935-bf58-3e7681692d9b\") " pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.269137 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7"} Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.269347 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.273517 4875 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mnrgk\" (UniqueName: \"kubernetes.io/projected/562b7bc8-0631-497c-9b8a-05af82dcfff9-kube-api-access-mnrgk\") pod \"multus-ck4hq\" (UID: \"562b7bc8-0631-497c-9b8a-05af82dcfff9\") " pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.276421 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbb6z\" (UniqueName: \"kubernetes.io/projected/85cf29f6-017d-475a-b63c-cd1cab3c8132-kube-api-access-fbb6z\") pod \"ovnkube-node-mps6c\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.279237 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4875]: E0130 16:56:51.279425 4875 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.292984 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.305113 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.313811 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.326919 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.345506 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.361317 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 
16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.373349 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.384657 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.386429 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.395153 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.402132 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-ck4hq" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.403760 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:51 crc kubenswrapper[4875]: W0130 16:56:51.414113 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85cf29f6_017d_475a_b63c_cd1cab3c8132.slice/crio-fb31988c8c373b3caffe3d25e35a9a4e043b0809bc35df330374eb0cf72cb0af WatchSource:0}: Error finding container fb31988c8c373b3caffe3d25e35a9a4e043b0809bc35df330374eb0cf72cb0af: Status 404 returned error can't find the container with id fb31988c8c373b3caffe3d25e35a9a4e043b0809bc35df330374eb0cf72cb0af Jan 30 16:56:51 crc kubenswrapper[4875]: W0130 16:56:51.419304 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod562b7bc8_0631_497c_9b8a_05af82dcfff9.slice/crio-7c98e916316d5af0475adeff9345336dc8b022447ffd2afbe4f0eaa4370d9e07 WatchSource:0}: Error finding container 7c98e916316d5af0475adeff9345336dc8b022447ffd2afbe4f0eaa4370d9e07: Status 404 returned error can't find the container with id 7c98e916316d5af0475adeff9345336dc8b022447ffd2afbe4f0eaa4370d9e07 Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.420875 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.439392 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.455486 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.477671 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.498658 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.517192 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.654771 4875 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.654955 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:51 crc kubenswrapper[4875]: E0130 16:56:51.655045 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:53.654997551 +0000 UTC m=+24.202360934 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:51 crc kubenswrapper[4875]: E0130 16:56:51.655117 4875 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:56:51 crc kubenswrapper[4875]: E0130 16:56:51.655236 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:53.655207228 +0000 UTC m=+24.202570781 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.756058 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.756128 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:51 crc kubenswrapper[4875]: I0130 16:56:51.756159 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:51 crc kubenswrapper[4875]: E0130 16:56:51.756305 4875 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:56:51 crc kubenswrapper[4875]: E0130 16:56:51.756366 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:56:51 crc kubenswrapper[4875]: E0130 16:56:51.756390 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:56:51 crc kubenswrapper[4875]: E0130 16:56:51.756449 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:56:51 crc kubenswrapper[4875]: E0130 16:56:51.756467 4875 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:51 crc kubenswrapper[4875]: E0130 16:56:51.756407 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:56:51 crc kubenswrapper[4875]: E0130 16:56:51.756527 4875 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:51 crc kubenswrapper[4875]: E0130 16:56:51.756386 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:53.756363777 +0000 UTC m=+24.303727160 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:56:51 crc kubenswrapper[4875]: E0130 16:56:51.756650 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:53.756623025 +0000 UTC m=+24.303986538 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:51 crc kubenswrapper[4875]: E0130 16:56:51.756685 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:53.756669697 +0000 UTC m=+24.304033280 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.094650 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:46:12.745854128 +0000 UTC Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.135518 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.135602 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.135599 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:52 crc kubenswrapper[4875]: E0130 16:56:52.135700 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:52 crc kubenswrapper[4875]: E0130 16:56:52.135938 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:52 crc kubenswrapper[4875]: E0130 16:56:52.135869 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.144664 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.145553 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.146317 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.147228 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.148621 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.149207 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.149800 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.150933 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 16:56:52 
crc kubenswrapper[4875]: I0130 16:56:52.151558 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.152670 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.153418 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.154112 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.155141 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.155847 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.157290 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.273570 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ck4hq" event={"ID":"562b7bc8-0631-497c-9b8a-05af82dcfff9","Type":"ContainerStarted","Data":"3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a"} Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.273683 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ck4hq" event={"ID":"562b7bc8-0631-497c-9b8a-05af82dcfff9","Type":"ContainerStarted","Data":"7c98e916316d5af0475adeff9345336dc8b022447ffd2afbe4f0eaa4370d9e07"} Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.275646 4875 generic.go:334] "Generic (PLEG): container finished" podID="1f2be659-2cd0-4935-bf58-3e7681692d9b" containerID="e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78" exitCode=0 Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.275766 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" event={"ID":"1f2be659-2cd0-4935-bf58-3e7681692d9b","Type":"ContainerDied","Data":"e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78"} Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.275851 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" event={"ID":"1f2be659-2cd0-4935-bf58-3e7681692d9b","Type":"ContainerStarted","Data":"fcb6bcea94e7ea68fa5aca20827ca753852ff294ca85f2e77db910277043e04a"} Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.277852 4875 generic.go:334] "Generic (PLEG): container finished" podID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerID="0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918" exitCode=0 Jan 30 16:56:52 crc kubenswrapper[4875]: 
I0130 16:56:52.277932 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerDied","Data":"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918"} Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.277989 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerStarted","Data":"fb31988c8c373b3caffe3d25e35a9a4e043b0809bc35df330374eb0cf72cb0af"} Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.297892 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.317653 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.339620 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.396998 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.458201 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.482239 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.506901 4875 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.524349 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.544088 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.566836 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.589579 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.607146 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.631043 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.657518 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\
":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.684071 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 
16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.699306 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.716230 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.736879 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.767710 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z 
is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.789977 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.805422 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.824021 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.847198 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.864361 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.887396 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.901880 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.922278 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:52 crc kubenswrapper[4875]: I0130 16:56:52.949814 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.096108 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:42:29.096358242 +0000 UTC Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.284004 4875 generic.go:334] "Generic (PLEG): container finished" podID="1f2be659-2cd0-4935-bf58-3e7681692d9b" containerID="0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc" exitCode=0 Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.284109 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" event={"ID":"1f2be659-2cd0-4935-bf58-3e7681692d9b","Type":"ContainerDied","Data":"0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc"} Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.290520 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerStarted","Data":"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa"} Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.290565 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerStarted","Data":"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f"} Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.290599 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerStarted","Data":"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88"} Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.290614 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerStarted","Data":"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6"} Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.290623 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerStarted","Data":"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc"} Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.290632 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerStarted","Data":"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7"} Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.304158 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.321064 4875 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.345087 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.359682 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.372784 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.387287 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.400776 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.416290 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.435481 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z 
is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.456606 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.478817 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.505037 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.527530 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.545482 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.680416 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:53 crc kubenswrapper[4875]: E0130 16:56:53.680613 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:57.680567974 +0000 UTC m=+28.227931357 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.680673 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:53 crc kubenswrapper[4875]: E0130 16:56:53.680789 4875 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:56:53 crc kubenswrapper[4875]: E0130 16:56:53.680844 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:57.680827392 +0000 UTC m=+28.228190775 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.782113 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.782169 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.782206 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:53 crc kubenswrapper[4875]: E0130 16:56:53.782324 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:56:53 crc kubenswrapper[4875]: E0130 16:56:53.782350 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:56:53 crc kubenswrapper[4875]: E0130 16:56:53.782362 4875 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:53 crc kubenswrapper[4875]: E0130 16:56:53.782324 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:56:53 crc kubenswrapper[4875]: E0130 16:56:53.782438 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:56:53 crc kubenswrapper[4875]: E0130 16:56:53.782441 4875 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:56:53 crc kubenswrapper[4875]: E0130 16:56:53.782455 4875 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:53 crc kubenswrapper[4875]: E0130 16:56:53.782418 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:57.782402885 +0000 UTC m=+28.329766268 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:53 crc kubenswrapper[4875]: E0130 16:56:53.782635 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:57.782579881 +0000 UTC m=+28.329943254 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:56:53 crc kubenswrapper[4875]: E0130 16:56:53.782661 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:57.782651983 +0000 UTC m=+28.330015366 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.902681 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.915889 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.917637 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.920382 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.932922 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.981008 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:53 crc kubenswrapper[4875]: I0130 16:56:53.995150 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.008099 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.020565 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.036364 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.058456 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z 
is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.074005 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.090410 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.097240 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 19:20:57.612012064 +0000 UTC Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.104998 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.118226 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.133783 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.136069 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.136268 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:54 crc kubenswrapper[4875]: E0130 16:56:54.136372 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.136402 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:54 crc kubenswrapper[4875]: E0130 16:56:54.136540 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:54 crc kubenswrapper[4875]: E0130 16:56:54.136790 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.150696 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.169915 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.190255 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.213487 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.229480 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.243557 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.254539 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.265202 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.278914 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.290204 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.297682 4875 generic.go:334] "Generic (PLEG): container finished" podID="1f2be659-2cd0-4935-bf58-3e7681692d9b" containerID="80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5" exitCode=0 Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.297761 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" event={"ID":"1f2be659-2cd0-4935-bf58-3e7681692d9b","Type":"ContainerDied","Data":"80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5"} Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.300351 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c"} Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.309441 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\
",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.332557 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\
",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.345915 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.357191 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.372786 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.398831 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z 
is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.416999 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.430981 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.445458 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.459731 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.475244 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.494497 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z 
is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.511027 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.525273 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.549618 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df4484902
17d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.563957 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.577293 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.590012 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.603649 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.616175 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:54 crc kubenswrapper[4875]: I0130 16:56:54.646263 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.097384 4875 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 22:28:05.248520563 +0000 UTC Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.307637 4875 generic.go:334] "Generic (PLEG): container finished" podID="1f2be659-2cd0-4935-bf58-3e7681692d9b" containerID="9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905" exitCode=0 Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.307737 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" event={"ID":"1f2be659-2cd0-4935-bf58-3e7681692d9b","Type":"ContainerDied","Data":"9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905"} Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.320098 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerStarted","Data":"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6"} Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.332747 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cn
i-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.348622 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.359921 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.372932 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.393021 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.413255 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:55Z 
is after 2025-08-24T17:21:41Z" Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.427318 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.443652 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.464363 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df4484902
17d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.480395 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.497473 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.510969 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.527872 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.542710 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:55 crc kubenswrapper[4875]: I0130 16:56:55.555176 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:55Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.098282 4875 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 10:25:44.886414942 +0000 UTC Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.136409 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:56 crc kubenswrapper[4875]: E0130 16:56:56.136604 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.136671 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.136743 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:56 crc kubenswrapper[4875]: E0130 16:56:56.136832 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:56 crc kubenswrapper[4875]: E0130 16:56:56.136921 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.328243 4875 generic.go:334] "Generic (PLEG): container finished" podID="1f2be659-2cd0-4935-bf58-3e7681692d9b" containerID="e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2" exitCode=0 Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.328318 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" event={"ID":"1f2be659-2cd0-4935-bf58-3e7681692d9b","Type":"ContainerDied","Data":"e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2"} Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.348977 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30
T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.365069 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.384752 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.400520 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.420636 4875 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.421802 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.424330 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.424372 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.424383 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.424520 4875 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.435302 4875 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.435784 4875 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.436306 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.437378 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.437407 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.437417 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.437435 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.437477 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:56Z","lastTransitionTime":"2026-01-30T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.451038 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: E0130 16:56:56.451918 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.460153 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.460184 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.460195 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.460211 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.460224 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:56Z","lastTransitionTime":"2026-01-30T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.465949 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: E0130 16:56:56.472434 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.477947 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.477975 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.477985 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.478001 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.478015 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:56Z","lastTransitionTime":"2026-01-30T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.478118 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: E0130 16:56:56.494238 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.498342 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.498407 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.498422 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.498446 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.498293 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.498460 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:56Z","lastTransitionTime":"2026-01-30T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:56 crc kubenswrapper[4875]: E0130 16:56:56.511487 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.513559 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.516657 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.517060 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.517073 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.517093 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.517106 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:56Z","lastTransitionTime":"2026-01-30T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.529608 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: E0130 16:56:56.531778 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: E0130 16:56:56.531898 4875 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.535645 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.535683 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.535694 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.535714 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.535741 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:56Z","lastTransitionTime":"2026-01-30T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.545792 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.559014 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.575934 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.638080 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.638128 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.638137 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.638151 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.638162 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:56Z","lastTransitionTime":"2026-01-30T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.741470 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.741530 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.741545 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.741576 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.741655 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:56Z","lastTransitionTime":"2026-01-30T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.844207 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.844267 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.844285 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.844313 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.844331 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:56Z","lastTransitionTime":"2026-01-30T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.947785 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.947892 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.947916 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.947954 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:56 crc kubenswrapper[4875]: I0130 16:56:56.947981 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:56Z","lastTransitionTime":"2026-01-30T16:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.050887 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.050946 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.050971 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.050990 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.051003 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:57Z","lastTransitionTime":"2026-01-30T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.099259 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:30:33.906797373 +0000 UTC Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.154072 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.154119 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.154130 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.154153 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.154167 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:57Z","lastTransitionTime":"2026-01-30T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.258001 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.258085 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.258100 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.258123 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.258336 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:57Z","lastTransitionTime":"2026-01-30T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.339250 4875 generic.go:334] "Generic (PLEG): container finished" podID="1f2be659-2cd0-4935-bf58-3e7681692d9b" containerID="648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6" exitCode=0 Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.339310 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" event={"ID":"1f2be659-2cd0-4935-bf58-3e7681692d9b","Type":"ContainerDied","Data":"648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6"} Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.358725 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dn
kzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.361526 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.361574 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.361604 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.361620 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.361630 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:57Z","lastTransitionTime":"2026-01-30T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.379052 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.397060 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.411331 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.426290 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e
14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.455264 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.466090 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.466175 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.466194 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.466694 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.466896 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:57Z","lastTransitionTime":"2026-01-30T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.470837 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.485711 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.496992 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.520572 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df4484902
17d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.539612 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.553524 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.568671 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.569989 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.570024 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.570033 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.570076 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.570088 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:57Z","lastTransitionTime":"2026-01-30T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.582983 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.602277 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.672406 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.672452 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.672462 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.672476 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.672487 4875 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:57Z","lastTransitionTime":"2026-01-30T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.728681 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:57 crc kubenswrapper[4875]: E0130 16:56:57.728932 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:57:05.728895322 +0000 UTC m=+36.276258705 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.728994 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:57 crc kubenswrapper[4875]: E0130 16:56:57.729115 4875 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:56:57 crc kubenswrapper[4875]: E0130 16:56:57.729251 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:57:05.729228803 +0000 UTC m=+36.276592236 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.776048 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.776106 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.776122 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.776149 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.776164 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:57Z","lastTransitionTime":"2026-01-30T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.830523 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.830614 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.830648 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:57 crc kubenswrapper[4875]: E0130 16:56:57.830758 4875 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:56:57 crc kubenswrapper[4875]: E0130 16:56:57.830884 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:57:05.830859148 +0000 UTC m=+36.378222531 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:56:57 crc kubenswrapper[4875]: E0130 16:56:57.830780 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:56:57 crc kubenswrapper[4875]: E0130 16:56:57.830933 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:56:57 crc kubenswrapper[4875]: E0130 16:56:57.830784 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:56:57 crc kubenswrapper[4875]: E0130 16:56:57.831046 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:56:57 crc kubenswrapper[4875]: E0130 16:56:57.830952 4875 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:57 crc kubenswrapper[4875]: E0130 16:56:57.831121 4875 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:57 crc kubenswrapper[4875]: E0130 16:56:57.831134 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:57:05.831108636 +0000 UTC m=+36.378472019 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:57 crc kubenswrapper[4875]: E0130 16:56:57.831207 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:57:05.831178978 +0000 UTC m=+36.378542361 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.878659 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.878707 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.878723 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.878744 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.878758 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:57Z","lastTransitionTime":"2026-01-30T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.982206 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.982298 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.982312 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.982331 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:57 crc kubenswrapper[4875]: I0130 16:56:57.982344 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:57Z","lastTransitionTime":"2026-01-30T16:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.085324 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.085392 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.085415 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.085445 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.085476 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:58Z","lastTransitionTime":"2026-01-30T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.099795 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:47:43.441720553 +0000 UTC Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.135948 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.136038 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.135982 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:58 crc kubenswrapper[4875]: E0130 16:56:58.136185 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:58 crc kubenswrapper[4875]: E0130 16:56:58.136292 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:58 crc kubenswrapper[4875]: E0130 16:56:58.136428 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.188813 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.189223 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.189236 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.189254 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.189265 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:58Z","lastTransitionTime":"2026-01-30T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.292509 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.292572 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.292631 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.292666 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.292693 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:58Z","lastTransitionTime":"2026-01-30T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.348920 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerStarted","Data":"293d30342857b25629e12c5c43af186ef33a9f30db2e0e8150b2c267f27f9ed9"} Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.349352 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.355231 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" event={"ID":"1f2be659-2cd0-4935-bf58-3e7681692d9b","Type":"ContainerStarted","Data":"c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015"} Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.368777 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 
16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.396455 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.396527 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.396545 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.396571 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.396607 4875 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:58Z","lastTransitionTime":"2026-01-30T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.410156 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.414188 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.424707 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.447701 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79
d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.473061 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log
-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://293d30342857b25629e12c5c43af186ef33a9f30db2e0e8150b2c267f27f9ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\
\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.489516 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.498458 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.498495 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.498505 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.498521 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.498532 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:58Z","lastTransitionTime":"2026-01-30T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.523504 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.538846 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.554713 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.569286 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.581770 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.598501 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.600880 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.600908 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.600920 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.600938 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.600950 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:58Z","lastTransitionTime":"2026-01-30T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.617090 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.629638 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.647845 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.663508 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.684224 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.700323 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.704002 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.704038 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.704049 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.704066 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.704078 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:58Z","lastTransitionTime":"2026-01-30T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.718149 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.747756 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://293d30342857b25629e12c5c43af186ef33a9f30
db2e0e8150b2c267f27f9ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.769504 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 
16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.787537 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.804558 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.806219 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.806272 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.806285 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.806309 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.806324 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:58Z","lastTransitionTime":"2026-01-30T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.823829 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 
2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.839875 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.856302 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.872062 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.885981 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.900634 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.910112 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.910150 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.910159 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.910177 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.910188 4875 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:58Z","lastTransitionTime":"2026-01-30T16:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:58 crc kubenswrapper[4875]: I0130 16:56:58.921232 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.017320 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.017376 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.017387 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.017407 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.017423 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:59Z","lastTransitionTime":"2026-01-30T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.100617 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 16:24:36.150811178 +0000 UTC Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.120737 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.120788 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.120805 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.120824 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.120836 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:59Z","lastTransitionTime":"2026-01-30T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.225031 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.225105 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.225130 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.225163 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.225186 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:59Z","lastTransitionTime":"2026-01-30T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.327985 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.328041 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.328060 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.328086 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.328106 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:59Z","lastTransitionTime":"2026-01-30T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.359404 4875 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.360229 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.381032 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.394935 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.410044 4875 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.427572 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.430250 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.430293 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.430302 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.430319 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.430329 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:59Z","lastTransitionTime":"2026-01-30T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.445989 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.461573 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.473251 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.487089 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.506601 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://293d30342857b25629e12c5c43af186ef33a9f30db2e0e8150b2c267f27f9ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.522317 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.532817 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.532866 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.532881 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.532898 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.532909 4875 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:59Z","lastTransitionTime":"2026-01-30T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.543051 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.559901 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.573287 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.585132 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.598437 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.613335 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.635795 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.635853 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.635870 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.635894 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.635910 4875 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:59Z","lastTransitionTime":"2026-01-30T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.738318 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.738378 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.738387 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.738405 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.738418 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:59Z","lastTransitionTime":"2026-01-30T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.841603 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.841688 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.841699 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.841714 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.841726 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:59Z","lastTransitionTime":"2026-01-30T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.932727 4875 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.943928 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.943964 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.943975 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.944002 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:59 crc kubenswrapper[4875]: I0130 16:56:59.944015 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:59Z","lastTransitionTime":"2026-01-30T16:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.047020 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.047087 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.047104 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.047128 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.047143 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:00Z","lastTransitionTime":"2026-01-30T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.101784 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:19:56.474349164 +0000 UTC Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.135446 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.135482 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.135508 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:00 crc kubenswrapper[4875]: E0130 16:57:00.135716 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:00 crc kubenswrapper[4875]: E0130 16:57:00.136230 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:00 crc kubenswrapper[4875]: E0130 16:57:00.136321 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.150972 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.151018 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.151028 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.151049 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.151063 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:00Z","lastTransitionTime":"2026-01-30T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.162540 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:00Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.172224 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:00Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.187021 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:00Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.202422 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:00Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.214972 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:00Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.230492 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:00Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.249482 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:00Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.257964 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.258003 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:00 crc 
kubenswrapper[4875]: I0130 16:57:00.258018 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.258036 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.258049 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:00Z","lastTransitionTime":"2026-01-30T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.289477 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://293d30342857b25629e12c5c43af186ef33a9f30
db2e0e8150b2c267f27f9ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:00Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.304815 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:00Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.323379 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:00Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.339007 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:00Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.353383 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:00Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.364648 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.364707 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.364720 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.364741 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.364758 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:00Z","lastTransitionTime":"2026-01-30T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.371564 4875 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.381638 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:00Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.399744 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:00Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.414057 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:00Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.468007 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.468076 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.468089 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.468108 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.468119 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:00Z","lastTransitionTime":"2026-01-30T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.571503 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.571568 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.571601 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.571623 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.571637 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:00Z","lastTransitionTime":"2026-01-30T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.674413 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.674444 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.674452 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.674466 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.674474 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:00Z","lastTransitionTime":"2026-01-30T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.777334 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.777373 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.777384 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.777403 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.777419 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:00Z","lastTransitionTime":"2026-01-30T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.880873 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.880948 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.880984 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.881019 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.881052 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:00Z","lastTransitionTime":"2026-01-30T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.984313 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.984390 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.984403 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.984426 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:00 crc kubenswrapper[4875]: I0130 16:57:00.984439 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:00Z","lastTransitionTime":"2026-01-30T16:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.087489 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.087564 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.087608 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.087637 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.087658 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:01Z","lastTransitionTime":"2026-01-30T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.103024 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 05:13:12.589299919 +0000 UTC
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.191182 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.191234 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.191244 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.191263 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.191275 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:01Z","lastTransitionTime":"2026-01-30T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.294764 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.294836 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.294852 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.294875 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.294889 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:01Z","lastTransitionTime":"2026-01-30T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.377107 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovnkube-controller/0.log"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.379985 4875 generic.go:334] "Generic (PLEG): container finished" podID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerID="293d30342857b25629e12c5c43af186ef33a9f30db2e0e8150b2c267f27f9ed9" exitCode=1
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.380032 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerDied","Data":"293d30342857b25629e12c5c43af186ef33a9f30db2e0e8150b2c267f27f9ed9"}
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.380707 4875 scope.go:117] "RemoveContainer" containerID="293d30342857b25629e12c5c43af186ef33a9f30db2e0e8150b2c267f27f9ed9"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.397021 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.397082 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.397105 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.397141 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.397169 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:01Z","lastTransitionTime":"2026-01-30T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.402181 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:01Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.423066 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:01Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.440978 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:01Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.456831 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:01Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.471399 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.484204 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.497235 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.500049 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.500096 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.500109 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.500131 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.500179 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:01Z","lastTransitionTime":"2026-01-30T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.514680 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:01Z is after 
2025-08-24T17:21:41Z" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.535314 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4
8be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://293d30342857b25629e12c5c43af186ef33a9f30db2e0e8150b2c267f27f9ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://293d30342857b25629e12c5c43af186ef33a9f30db2e0e8150b2c267f27f9ed9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:00Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:57:00.578459 6167 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:57:00.578529 6167 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:57:00.578571 6167 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:57:00.578629 6167 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:57:00.578756 6167 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:57:00.579343 6167 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 16:57:00.579375 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 16:57:00.579386 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 16:57:00.579435 6167 factory.go:656] Stopping watch factory\\\\nI0130 16:57:00.579442 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 16:57:00.579456 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:57:00.579462 6167 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.560106 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.575546 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.589727 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.602820 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.602857 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.602867 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.602882 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.602899 4875 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:01Z","lastTransitionTime":"2026-01-30T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.614940 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.630578 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.644196 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:01Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.710882 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.710933 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.710944 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.710963 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.710974 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:01Z","lastTransitionTime":"2026-01-30T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.824031 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.824203 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.824217 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.824575 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.824718 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:01Z","lastTransitionTime":"2026-01-30T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.927748 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.927806 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.927818 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.927835 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:01 crc kubenswrapper[4875]: I0130 16:57:01.927849 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:01Z","lastTransitionTime":"2026-01-30T16:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.031150 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.031211 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.031223 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.031245 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.031293 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:02Z","lastTransitionTime":"2026-01-30T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.104193 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 08:56:12.233545229 +0000 UTC Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.134122 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.134174 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.134185 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.134206 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.134220 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:02Z","lastTransitionTime":"2026-01-30T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.135556 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.135614 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.135620 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:02 crc kubenswrapper[4875]: E0130 16:57:02.135731 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:02 crc kubenswrapper[4875]: E0130 16:57:02.135835 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:02 crc kubenswrapper[4875]: E0130 16:57:02.135951 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.238393 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.238445 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.238460 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.238483 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.238496 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:02Z","lastTransitionTime":"2026-01-30T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.324430 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2"] Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.325081 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.328534 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.329259 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.341373 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.341418 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.341429 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.341446 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.341458 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:02Z","lastTransitionTime":"2026-01-30T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.353777 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.369436 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.386565 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovnkube-controller/1.log" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.387284 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovnkube-controller/0.log" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.389094 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd5fp\" (UniqueName: \"kubernetes.io/projected/92a13cd1-8c0d-4eab-b29c-5fe6d1598629-kube-api-access-qd5fp\") pod \"ovnkube-control-plane-749d76644c-5rzl2\" (UID: \"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.389144 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/92a13cd1-8c0d-4eab-b29c-5fe6d1598629-ovnkube-config\") pod 
\"ovnkube-control-plane-749d76644c-5rzl2\" (UID: \"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.389207 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/92a13cd1-8c0d-4eab-b29c-5fe6d1598629-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5rzl2\" (UID: \"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.389251 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/92a13cd1-8c0d-4eab-b29c-5fe6d1598629-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5rzl2\" (UID: \"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.391352 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.391500 4875 generic.go:334] "Generic (PLEG): container finished" podID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerID="ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd" exitCode=1 Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.391541 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerDied","Data":"ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd"} Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.391749 4875 scope.go:117] "RemoveContainer" containerID="293d30342857b25629e12c5c43af186ef33a9f30db2e0e8150b2c267f27f9ed9" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.392348 4875 scope.go:117] "RemoveContainer" 
containerID="ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd" Jan 30 16:57:02 crc kubenswrapper[4875]: E0130 16:57:02.392720 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.429796 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ec
d6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.444379 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.444438 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.444453 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.444476 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.444488 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:02Z","lastTransitionTime":"2026-01-30T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.446223 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.463657 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.481382 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.489651 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/92a13cd1-8c0d-4eab-b29c-5fe6d1598629-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5rzl2\" (UID: \"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.489689 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/92a13cd1-8c0d-4eab-b29c-5fe6d1598629-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5rzl2\" (UID: \"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.489743 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd5fp\" (UniqueName: \"kubernetes.io/projected/92a13cd1-8c0d-4eab-b29c-5fe6d1598629-kube-api-access-qd5fp\") pod \"ovnkube-control-plane-749d76644c-5rzl2\" (UID: \"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.489783 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/92a13cd1-8c0d-4eab-b29c-5fe6d1598629-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5rzl2\" (UID: \"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.490683 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/92a13cd1-8c0d-4eab-b29c-5fe6d1598629-env-overrides\") pod 
\"ovnkube-control-plane-749d76644c-5rzl2\" (UID: \"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.490824 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/92a13cd1-8c0d-4eab-b29c-5fe6d1598629-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5rzl2\" (UID: \"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.496637 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.503152 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/92a13cd1-8c0d-4eab-b29c-5fe6d1598629-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5rzl2\" (UID: \"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.508973 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd5fp\" (UniqueName: 
\"kubernetes.io/projected/92a13cd1-8c0d-4eab-b29c-5fe6d1598629-kube-api-access-qd5fp\") pod \"ovnkube-control-plane-749d76644c-5rzl2\" (UID: \"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.509689 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.525408 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.539236 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.552145 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.552192 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.552207 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.552229 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.552244 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:02Z","lastTransitionTime":"2026-01-30T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.555690 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 
16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.568415 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.579929 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.596039 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.615252 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://293d30342857b25629e12c5c43af186ef33a9f30db2e0e8150b2c267f27f9ed9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://293d30342857b25629e12c5c43af186ef33a9f30db2e0e8150b2c267f27f9ed9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:00Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:57:00.578459 6167 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:57:00.578529 6167 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:57:00.578571 6167 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:57:00.578629 6167 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:57:00.578756 6167 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:57:00.579343 6167 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 16:57:00.579375 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 16:57:00.579386 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 16:57:00.579435 6167 factory.go:656] Stopping watch factory\\\\nI0130 16:57:00.579442 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 16:57:00.579456 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:57:00.579462 6167 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.629023 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.647043 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.648317 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b072553152
7f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.654145 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.654174 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.654183 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.654197 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.654206 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:02Z","lastTransitionTime":"2026-01-30T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.668008 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.683632 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.696186 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.713963 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.729165 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.744984 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: 
I0130 16:57:02.759167 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.759219 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.759228 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.759247 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.759260 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:02Z","lastTransitionTime":"2026-01-30T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.762813 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.779062 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.819255 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.845703 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 
16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.866821 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.866874 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.866887 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.866910 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.866926 4875 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:02Z","lastTransitionTime":"2026-01-30T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.881714 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.891541 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.906899 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.925144 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://293d30342857b25629e12c5c43af186ef33a9f30db2e0e8150b2c267f27f9ed9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:00Z\\\",\\\"message\\\":\\\"rom k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:57:00.578459 6167 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:57:00.578529 6167 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:57:00.578571 6167 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:57:00.578629 6167 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:57:00.578756 6167 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:57:00.579343 6167 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0130 16:57:00.579375 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0130 16:57:00.579386 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0130 16:57:00.579435 6167 factory.go:656] Stopping watch factory\\\\nI0130 16:57:00.579442 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0130 16:57:00.579456 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:57:00.579462 6167 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0130 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"message\\\":\\\"rnal_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:57:02.225373 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.976711 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.976765 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.976781 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.976803 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:02 crc kubenswrapper[4875]: I0130 16:57:02.976819 4875 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:02Z","lastTransitionTime":"2026-01-30T16:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.079666 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.079707 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.079717 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.079732 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.079744 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:03Z","lastTransitionTime":"2026-01-30T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.105262 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 06:22:56.732410468 +0000 UTC Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.182127 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.182184 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.182200 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.182225 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.182246 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:03Z","lastTransitionTime":"2026-01-30T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.285638 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.285712 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.285736 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.285769 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.285790 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:03Z","lastTransitionTime":"2026-01-30T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.388563 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.388664 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.388720 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.388749 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.388767 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:03Z","lastTransitionTime":"2026-01-30T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.399173 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovnkube-controller/1.log" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.411991 4875 scope.go:117] "RemoveContainer" containerID="ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.411791 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" event={"ID":"92a13cd1-8c0d-4eab-b29c-5fe6d1598629","Type":"ContainerStarted","Data":"a854cf89a4118836f05e415d37e57b0c12504d85980c3f2230f72fcbdd381432"} Jan 30 16:57:03 crc kubenswrapper[4875]: E0130 16:57:03.412926 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.430384 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.449473 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.465150 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.486988 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df4484902
17d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.492335 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.492384 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.492397 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.492418 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.492431 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:03Z","lastTransitionTime":"2026-01-30T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.504216 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.522147 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.538869 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.556279 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.570647 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.585341 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.596032 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.596539 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.596720 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.596899 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.597071 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:03Z","lastTransitionTime":"2026-01-30T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.601165 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.620345 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 
16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.636288 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.650573 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.667254 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.690011 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"message\\\":\\\"rnal_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:57:02.225373 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.700132 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.700356 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.700440 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.700517 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.700620 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:03Z","lastTransitionTime":"2026-01-30T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.806080 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.806125 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.806137 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.806153 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.806170 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:03Z","lastTransitionTime":"2026-01-30T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.909108 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.909518 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.909610 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.909694 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:03 crc kubenswrapper[4875]: I0130 16:57:03.909770 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:03Z","lastTransitionTime":"2026-01-30T16:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.013566 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.014260 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.014350 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.014429 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.014521 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:04Z","lastTransitionTime":"2026-01-30T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.106129 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 20:14:58.921546301 +0000 UTC Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.116810 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.116846 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.116859 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.116875 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.116885 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:04Z","lastTransitionTime":"2026-01-30T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.137844 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.137917 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.137981 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:04 crc kubenswrapper[4875]: E0130 16:57:04.138099 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:04 crc kubenswrapper[4875]: E0130 16:57:04.138232 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:04 crc kubenswrapper[4875]: E0130 16:57:04.138318 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.219477 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.219579 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.219616 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.219638 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.219653 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:04Z","lastTransitionTime":"2026-01-30T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.322320 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.322367 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.322378 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.322395 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.322404 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:04Z","lastTransitionTime":"2026-01-30T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.417259 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" event={"ID":"92a13cd1-8c0d-4eab-b29c-5fe6d1598629","Type":"ContainerStarted","Data":"5e9ae124864c3ff9984c3b20615ed908dc0f7b190f322642d97dbd0338aea92d"} Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.417312 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" event={"ID":"92a13cd1-8c0d-4eab-b29c-5fe6d1598629","Type":"ContainerStarted","Data":"2fdb34b6f0a28383b063244f9229d8a4d46f8e33104f7a3cad58b8b3188ff582"} Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.424980 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.425019 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.425034 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.425065 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.425080 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:04Z","lastTransitionTime":"2026-01-30T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.449663 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.474319 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.490774 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.508241 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.526852 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.528995 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.529144 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.529455 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.529739 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.530176 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:04Z","lastTransitionTime":"2026-01-30T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.553274 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.554555 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-ptnnq"] Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.555535 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:04 crc kubenswrapper[4875]: E0130 16:57:04.555695 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.571859 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.593241 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.613013 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fdb34b6f0a28383b063244f9229d8a4d46f8e33104f7a3cad58b8b3188ff582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9ae124864c3ff9984c3b20615ed908dc0f7b190f322642d97dbd0338aea92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\
\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.615791 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs\") pod \"network-metrics-daemon-ptnnq\" (UID: \"64282947-3e36-453a-b460-ada872b157c9\") " pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.615885 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2fpn\" (UniqueName: \"kubernetes.io/projected/64282947-3e36-453a-b460-ada872b157c9-kube-api-access-q2fpn\") pod \"network-metrics-daemon-ptnnq\" (UID: \"64282947-3e36-453a-b460-ada872b157c9\") " pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.631221 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.634289 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.634348 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.634367 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.634394 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.634415 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:04Z","lastTransitionTime":"2026-01-30T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.649244 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.662916 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.684197 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.703903 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"message\\\":\\\"rnal_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:57:02.225373 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.717028 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2fpn\" (UniqueName: \"kubernetes.io/projected/64282947-3e36-453a-b460-ada872b157c9-kube-api-access-q2fpn\") pod \"network-metrics-daemon-ptnnq\" (UID: \"64282947-3e36-453a-b460-ada872b157c9\") " pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.717214 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs\") pod \"network-metrics-daemon-ptnnq\" (UID: \"64282947-3e36-453a-b460-ada872b157c9\") " pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:04 crc kubenswrapper[4875]: E0130 16:57:04.717401 4875 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:57:04 crc kubenswrapper[4875]: E0130 16:57:04.717501 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs podName:64282947-3e36-453a-b460-ada872b157c9 nodeName:}" failed. No retries permitted until 2026-01-30 16:57:05.217478524 +0000 UTC m=+35.764841907 (durationBeforeRetry 500ms). 
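[Editor's note] Every "Failed to update status for pod" record in this stretch of the log fails for the same reason: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate whose NotAfter (2025-08-24T17:21:41Z) is in the past relative to the node clock (2026-01-30T16:57:04Z). A minimal diagnostic sketch, assuming only the endpoint and port taken from the records above, and using InsecureSkipVerify solely so the expired certificate can be inspected instead of being rejected the way the kubelet's client rejects it:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Dial the webhook endpoint from the log records above. The kubelet's
	// TLS handshake fails x509 verification because the serving certificate
	// is expired; skipping verification here lets us read it anyway.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	leaf := certs[0]
	now := time.Now()
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore)
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter)
	// This comparison is the same NotAfter check the x509 error reports.
	fmt.Printf("expired:   %v\n", now.After(leaf.NotAfter))
}

If this prints expired: true against a correct clock, the webhook's serving certificate needs rotation; if the node clock has instead jumped ahead of real time, fixing the clock is the other explanation to rule out.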
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs") pod "network-metrics-daemon-ptnnq" (UID: "64282947-3e36-453a-b460-ada872b157c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.723021 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"
,\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.736542 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.736602 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.736614 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.736633 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.736643 4875 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:04Z","lastTransitionTime":"2026-01-30T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.739263 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.743009 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2fpn\" (UniqueName: \"kubernetes.io/projected/64282947-3e36-453a-b460-ada872b157c9-kube-api-access-q2fpn\") pod \"network-metrics-daemon-ptnnq\" (UID: \"64282947-3e36-453a-b460-ada872b157c9\") " pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.759198 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.773847 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.785878 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.802414 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.827312 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"message\\\":\\\"rnal_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:57:02.225373 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.840467 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.840473 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.840513 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.840721 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.840765 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.840781 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:04Z","lastTransitionTime":"2026-01-30T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.856442 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.883448 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df4484902
17d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.899539 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.918499 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.934204 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.943448 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.943500 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.943513 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.943536 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.943550 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:04Z","lastTransitionTime":"2026-01-30T16:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.948642 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64282947-3e36-453a-b460-ada872b157c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ptnnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.963860 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.980056 4875 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:04 crc kubenswrapper[4875]: I0130 16:57:04.992495 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.008788 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:05Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.022753 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fdb34b6f0a28383b063244f9229d8a4d46f8e33104f7a3cad58b8b3188ff582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9ae124864c3ff9984c3b20615ed908dc0f7b190f322642d97dbd0338aea92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:05Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.046350 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.046403 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.046418 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.046440 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.046460 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:05Z","lastTransitionTime":"2026-01-30T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.107227 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 10:33:19.153501573 +0000 UTC Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.149451 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.149526 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.149547 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.149571 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.149633 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:05Z","lastTransitionTime":"2026-01-30T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.225051 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs\") pod \"network-metrics-daemon-ptnnq\" (UID: \"64282947-3e36-453a-b460-ada872b157c9\") " pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:05 crc kubenswrapper[4875]: E0130 16:57:05.225352 4875 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:57:05 crc kubenswrapper[4875]: E0130 16:57:05.225499 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs podName:64282947-3e36-453a-b460-ada872b157c9 nodeName:}" failed. No retries permitted until 2026-01-30 16:57:06.225457791 +0000 UTC m=+36.772821224 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs") pod "network-metrics-daemon-ptnnq" (UID: "64282947-3e36-453a-b460-ada872b157c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.253945 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.254017 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.254037 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.254064 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.254083 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:05Z","lastTransitionTime":"2026-01-30T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.356743 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.356805 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.356815 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.356835 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.356851 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:05Z","lastTransitionTime":"2026-01-30T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.459544 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.459661 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.459686 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.459718 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.459744 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:05Z","lastTransitionTime":"2026-01-30T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.563539 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.563661 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.563683 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.563712 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.563730 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:05Z","lastTransitionTime":"2026-01-30T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.666646 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.667060 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.667392 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.667768 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.668085 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:05Z","lastTransitionTime":"2026-01-30T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.729241 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:57:05 crc kubenswrapper[4875]: E0130 16:57:05.729696 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:57:21.729655518 +0000 UTC m=+52.277018951 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.730145 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:05 crc kubenswrapper[4875]: E0130 16:57:05.730422 4875 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:57:05 crc kubenswrapper[4875]: E0130 16:57:05.730792 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:57:21.730771422 +0000 UTC m=+52.278134845 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.771877 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.772343 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.772560 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.772809 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.772989 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:05Z","lastTransitionTime":"2026-01-30T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.832141 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:05 crc kubenswrapper[4875]: E0130 16:57:05.832468 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:57:05 crc kubenswrapper[4875]: E0130 16:57:05.832638 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:57:05 crc kubenswrapper[4875]: E0130 16:57:05.832671 4875 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.832811 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:05 crc kubenswrapper[4875]: E0130 16:57:05.832914 4875 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:57:05 crc kubenswrapper[4875]: 
E0130 16:57:05.833060 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:57:21.833034587 +0000 UTC m=+52.380397980 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.832944 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:05 crc kubenswrapper[4875]: E0130 16:57:05.833219 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:57:21.833205922 +0000 UTC m=+52.380569325 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:57:05 crc kubenswrapper[4875]: E0130 16:57:05.833225 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:57:05 crc kubenswrapper[4875]: E0130 16:57:05.833390 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:57:05 crc kubenswrapper[4875]: E0130 16:57:05.833465 4875 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:57:05 crc kubenswrapper[4875]: E0130 16:57:05.833829 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:57:21.83380321 +0000 UTC m=+52.381166633 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.876405 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.876479 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.876496 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.876523 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.876569 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:05Z","lastTransitionTime":"2026-01-30T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.979685 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.979743 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.979758 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.979779 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:05 crc kubenswrapper[4875]: I0130 16:57:05.979795 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:05Z","lastTransitionTime":"2026-01-30T16:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.082293 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.082369 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.082383 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.082400 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.082412 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:06Z","lastTransitionTime":"2026-01-30T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.107790 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:15:30.719272415 +0000 UTC Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.135212 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:06 crc kubenswrapper[4875]: E0130 16:57:06.135388 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.135700 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.135996 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.135213 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:06 crc kubenswrapper[4875]: E0130 16:57:06.136107 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:06 crc kubenswrapper[4875]: E0130 16:57:06.136426 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:06 crc kubenswrapper[4875]: E0130 16:57:06.136772 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.184988 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.185351 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.185476 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.185564 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.185702 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:06Z","lastTransitionTime":"2026-01-30T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.239015 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs\") pod \"network-metrics-daemon-ptnnq\" (UID: \"64282947-3e36-453a-b460-ada872b157c9\") " pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:06 crc kubenswrapper[4875]: E0130 16:57:06.239199 4875 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:57:06 crc kubenswrapper[4875]: E0130 16:57:06.239267 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs podName:64282947-3e36-453a-b460-ada872b157c9 nodeName:}" failed. No retries permitted until 2026-01-30 16:57:08.239250534 +0000 UTC m=+38.786613927 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs") pod "network-metrics-daemon-ptnnq" (UID: "64282947-3e36-453a-b460-ada872b157c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.289083 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.289155 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.289177 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.289207 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.289228 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:06Z","lastTransitionTime":"2026-01-30T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.392874 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.392945 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.392970 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.393009 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.393033 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:06Z","lastTransitionTime":"2026-01-30T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.496784 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.496849 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.496874 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.496912 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.496935 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:06Z","lastTransitionTime":"2026-01-30T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.600413 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.600489 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.600514 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.600549 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.600574 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:06Z","lastTransitionTime":"2026-01-30T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.703809 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.703885 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.703909 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.703941 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.703964 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:06Z","lastTransitionTime":"2026-01-30T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.735629 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.735688 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.735713 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.735739 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.735759 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:06Z","lastTransitionTime":"2026-01-30T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:06 crc kubenswrapper[4875]: E0130 16:57:06.758330 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.763572 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.763643 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.763654 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.763673 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.763685 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:06Z","lastTransitionTime":"2026-01-30T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:06 crc kubenswrapper[4875]: E0130 16:57:06.784202 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.790231 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.790356 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.790377 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.790411 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.790432 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:06Z","lastTransitionTime":"2026-01-30T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:06 crc kubenswrapper[4875]: E0130 16:57:06.814745 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.825471 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.825555 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.825576 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.825641 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.825665 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:06Z","lastTransitionTime":"2026-01-30T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:06 crc kubenswrapper[4875]: E0130 16:57:06.847111 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.852658 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.852702 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.852720 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.852743 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.852761 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:06Z","lastTransitionTime":"2026-01-30T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:06 crc kubenswrapper[4875]: E0130 16:57:06.875775 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:06Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:06 crc kubenswrapper[4875]: E0130 16:57:06.875999 4875 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.878436 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
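All of the failed patch attempts above share one root cause: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, while the node clock reads 2026-01-30. A minimal Go sketch of an independent check, assuming the webhook is still listening on that loopback port (the address comes from the log; everything else here is illustrative, not part of the log's tooling):

// certcheck.go - dial the webhook endpoint named in the log with
// verification disabled and print the serving certificate's validity
// window. Minimal diagnostic sketch; run on the node itself.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // we want to read the cert even though it is expired
	})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		log.Fatal("no peer certificate presented")
	}
	cert := certs[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	if now.After(cert.NotAfter) {
		fmt.Printf("EXPIRED: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	}
}

With InsecureSkipVerify set, the handshake succeeds even against an expired certificate, so NotAfter can be read and compared against the clock, reproducing the kubelet's "current time ... is after ..." message.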
event="NodeHasSufficientMemory" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.878679 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.878852 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.879024 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.879187 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:06Z","lastTransitionTime":"2026-01-30T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.982227 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.982300 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.982319 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.982348 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:06 crc kubenswrapper[4875]: I0130 16:57:06.982368 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:06Z","lastTransitionTime":"2026-01-30T16:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.085724 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.085755 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.085766 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.085783 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.085795 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:07Z","lastTransitionTime":"2026-01-30T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.109545 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 13:15:02.30457295 +0000 UTC Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.188755 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.188812 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.188829 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.188855 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.188874 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:07Z","lastTransitionTime":"2026-01-30T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.291639 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.291686 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.291696 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.291714 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.291729 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:07Z","lastTransitionTime":"2026-01-30T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
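The certificate_manager line above is the one healthy certificate in view: the kubelet-serving certificate remains valid until 2026-02-24, and its rotation deadline (2025-12-17) already lies behind the log clock of 2026-01-30, so a rotation attempt is due. The deadline itself is a jittered point inside the certificate's lifetime; below is a rough reconstruction of how client-go's certificate manager picks it, at roughly 70-90% of the way from NotBefore to NotAfter (an approximation, not the actual k8s.io/client-go/util/certificate source; the one-year lifetime is assumed purely for illustration):

// rotation.go - sketch of a jittered rotation deadline: a uniformly
// random point in the [70%, 90%] segment of the certificate's validity.
// Approximate reconstruction of client-go's behavior, not its code.
package main

import (
	"fmt"
	"log"
	"math/rand"
	"time"
)

// nextRotationDeadline picks a random point 70-90% of the way through
// the certificate's lifetime.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	offset := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(offset)
}

func main() {
	// Expiration taken from the log line above; Go's default time layout.
	notAfter, err := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	notBefore := notAfter.AddDate(-1, 0, 0) // assumed one-year lifetime, for illustration

	deadline := nextRotationDeadline(notBefore, notAfter)
	fmt.Printf("expiration:        %s\n", notAfter.UTC())
	fmt.Printf("rotation deadline: %s (rotate once now > deadline)\n", deadline.UTC())
}

With a one-year lifetime ending 2026-02-24, the 70-90% window runs from early November to mid-December 2025, which is consistent with the 2025-12-17 deadline the log reports.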
Has your network provider started?"}
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.394833 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.394905 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.395003 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.395024 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.395038 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:07Z","lastTransitionTime":"2026-01-30T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.498978 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.499043 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.499061 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.499088 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.499107 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:07Z","lastTransitionTime":"2026-01-30T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.602146 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.602222 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.602243 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.602275 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.602296 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:07Z","lastTransitionTime":"2026-01-30T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.705737 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.705791 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.705804 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.705828 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.705849 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:07Z","lastTransitionTime":"2026-01-30T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.808792 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.808838 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.808851 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.808881 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.808894 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:07Z","lastTransitionTime":"2026-01-30T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.912448 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.912502 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.912523 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.912541 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:07 crc kubenswrapper[4875]: I0130 16:57:07.912553 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:07Z","lastTransitionTime":"2026-01-30T16:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.015912 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.015984 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.015999 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.016023 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.016039 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:08Z","lastTransitionTime":"2026-01-30T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.110216 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:36:56.420422933 +0000 UTC
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.118301 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.118342 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.118355 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.118376 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.118389 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:08Z","lastTransitionTime":"2026-01-30T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.135772 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.135772 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.135891 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq"
Jan 30 16:57:08 crc kubenswrapper[4875]: E0130 16:57:08.136058 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.136094 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:57:08 crc kubenswrapper[4875]: E0130 16:57:08.136175 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:57:08 crc kubenswrapper[4875]: E0130 16:57:08.136273 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:57:08 crc kubenswrapper[4875]: E0130 16:57:08.136374 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.222304 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.222342 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.222355 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.222375 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.222387 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:08Z","lastTransitionTime":"2026-01-30T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.264655 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs\") pod \"network-metrics-daemon-ptnnq\" (UID: \"64282947-3e36-453a-b460-ada872b157c9\") " pod="openshift-multus/network-metrics-daemon-ptnnq"
Jan 30 16:57:08 crc kubenswrapper[4875]: E0130 16:57:08.264821 4875 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 16:57:08 crc kubenswrapper[4875]: E0130 16:57:08.264870 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs podName:64282947-3e36-453a-b460-ada872b157c9 nodeName:}" failed. No retries permitted until 2026-01-30 16:57:12.264854799 +0000 UTC m=+42.812218182 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs") pod "network-metrics-daemon-ptnnq" (UID: "64282947-3e36-453a-b460-ada872b157c9") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.324742 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.324814 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.324831 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.324855 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.324871 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:08Z","lastTransitionTime":"2026-01-30T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.427190 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.427237 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.427248 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.427266 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.427277 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:08Z","lastTransitionTime":"2026-01-30T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.530343 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.530400 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.530409 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.530431 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.530442 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:08Z","lastTransitionTime":"2026-01-30T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.633419 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.633473 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.633485 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.633503 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.633519 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:08Z","lastTransitionTime":"2026-01-30T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.737116 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.737192 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.737210 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.737237 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.737257 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:08Z","lastTransitionTime":"2026-01-30T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.840318 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.840391 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.840413 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.840439 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.840457 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:08Z","lastTransitionTime":"2026-01-30T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.943219 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.943274 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.943290 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.943338 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:08 crc kubenswrapper[4875]: I0130 16:57:08.943357 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:08Z","lastTransitionTime":"2026-01-30T16:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.046029 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.046088 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.046107 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.046132 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.046150 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:09Z","lastTransitionTime":"2026-01-30T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.107362 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.110558 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 02:08:49.916021306 +0000 UTC
Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.145922 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba49f4eadb564174cdb325b4036e7a9a721352ca
ce5c212d03b8b2f4ecef11dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"message\\\":\\\"rnal_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:57:02.225373 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:09Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.149104 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.149258 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.149324 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.149393 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.149459 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:09Z","lastTransitionTime":"2026-01-30T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.166385 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:09Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.182576 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:09Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.199129 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:09Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.221747 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:09Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.242997 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:09Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.252857 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.252998 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.253103 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.253178 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.253237 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:09Z","lastTransitionTime":"2026-01-30T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.264971 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:09Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.283707 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:09Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.301611 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:09Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.320086 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:09Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.343087 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:09Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.356052 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.356112 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.356124 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.356146 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.356159 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:09Z","lastTransitionTime":"2026-01-30T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.360193 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64282947-3e36-453a-b460-ada872b157c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ptnnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:09Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.375743 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:09Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.392555 4875 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:09Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.408606 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:09Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.432094 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:09Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.448864 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fdb34b6f0a28383b063244f9229d8a4d46f8e33104f7a3cad58b8b3188ff582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9ae124864c3ff9984c3b20615ed908dc0f7b190f322642d97dbd0338aea92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:09Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.459175 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.459263 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.459284 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.459316 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.459337 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:09Z","lastTransitionTime":"2026-01-30T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.563142 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.563569 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.563722 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.563855 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.563967 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:09Z","lastTransitionTime":"2026-01-30T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.667716 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.667768 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.667785 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.667809 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.667826 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:09Z","lastTransitionTime":"2026-01-30T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.771329 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.771390 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.771403 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.771424 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.771439 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:09Z","lastTransitionTime":"2026-01-30T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.874398 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.874493 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.874514 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.874875 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.874934 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:09Z","lastTransitionTime":"2026-01-30T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.978826 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.978899 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.978917 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.978942 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:09 crc kubenswrapper[4875]: I0130 16:57:09.978958 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:09Z","lastTransitionTime":"2026-01-30T16:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.082495 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.082809 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.082844 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.082914 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.082939 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:10Z","lastTransitionTime":"2026-01-30T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.111046 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:17:06.770906516 +0000 UTC Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.136067 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:10 crc kubenswrapper[4875]: E0130 16:57:10.136351 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.136466 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:10 crc kubenswrapper[4875]: E0130 16:57:10.136785 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.136881 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.136921 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:10 crc kubenswrapper[4875]: E0130 16:57:10.137068 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:10 crc kubenswrapper[4875]: E0130 16:57:10.137174 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.157741 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.178576 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.186117 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.186197 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.186218 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.186249 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.186271 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:10Z","lastTransitionTime":"2026-01-30T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.195923 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fdb34b6f0a28383b063244f9229d8a4d46f8e33104f7a3cad58b8b3188ff582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9ae124864c3ff9984c3b20615ed908dc0f7b190f322642d97dbd0338aea92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.216661 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.234269 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.249117 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.275391 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.290752 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.291092 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:10 crc 
kubenswrapper[4875]: I0130 16:57:10.291188 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.291292 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.291413 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:10Z","lastTransitionTime":"2026-01-30T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.301452 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba49f4eadb564174cdb325b4036e7a9a721352ca
ce5c212d03b8b2f4ecef11dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"message\\\":\\\"rnal_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:57:02.225373 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.319806 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 
configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T16:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.335400 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.359485 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df4484902
17d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.375471 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.388157 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.393809 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.393862 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.393872 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.393890 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.393903 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:10Z","lastTransitionTime":"2026-01-30T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.403028 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.418646 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.435154 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.450769 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64282947-3e36-453a-b460-ada872b157c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ptnnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:10Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.496646 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.496712 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.496723 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.496745 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.496759 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:10Z","lastTransitionTime":"2026-01-30T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.600078 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.600235 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.600261 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.600294 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.600319 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:10Z","lastTransitionTime":"2026-01-30T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.703916 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.704471 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.704743 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.704973 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.705156 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:10Z","lastTransitionTime":"2026-01-30T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.808167 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.808223 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.808248 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.808280 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.808304 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:10Z","lastTransitionTime":"2026-01-30T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.912169 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.912231 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.912256 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.912287 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:10 crc kubenswrapper[4875]: I0130 16:57:10.912310 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:10Z","lastTransitionTime":"2026-01-30T16:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.016143 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.016212 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.016231 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.016262 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.016284 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:11Z","lastTransitionTime":"2026-01-30T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.111262 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 19:00:23.050038521 +0000 UTC Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.120211 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.120274 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.120288 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.120312 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.120325 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:11Z","lastTransitionTime":"2026-01-30T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.224497 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.224577 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.224627 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.224655 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.224671 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:11Z","lastTransitionTime":"2026-01-30T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.328149 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.328199 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.328210 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.328225 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.328235 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:11Z","lastTransitionTime":"2026-01-30T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.431037 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.431076 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.431085 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.431100 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.431111 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:11Z","lastTransitionTime":"2026-01-30T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.534772 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.535165 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.535358 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.535492 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.535646 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:11Z","lastTransitionTime":"2026-01-30T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.638383 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.638427 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.638439 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.638485 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.638499 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:11Z","lastTransitionTime":"2026-01-30T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.741626 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.741689 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.741705 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.741725 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.741738 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:11Z","lastTransitionTime":"2026-01-30T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.844636 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.844947 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.845011 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.845084 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.845199 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:11Z","lastTransitionTime":"2026-01-30T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.949029 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.949106 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.949131 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.949162 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:11 crc kubenswrapper[4875]: I0130 16:57:11.949183 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:11Z","lastTransitionTime":"2026-01-30T16:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:12 crc kubenswrapper[4875]: I0130 16:57:12.052807 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:12 crc kubenswrapper[4875]: I0130 16:57:12.052882 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:12 crc kubenswrapper[4875]: I0130 16:57:12.052896 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:12 crc kubenswrapper[4875]: I0130 16:57:12.052920 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:12 crc kubenswrapper[4875]: I0130 16:57:12.052937 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:12Z","lastTransitionTime":"2026-01-30T16:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:12 crc kubenswrapper[4875]: I0130 16:57:12.111548 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:31:03.473358342 +0000 UTC Jan 30 16:57:12 crc kubenswrapper[4875]: I0130 16:57:12.135803 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:12 crc kubenswrapper[4875]: E0130 16:57:12.135942 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:12 crc kubenswrapper[4875]: I0130 16:57:12.136567 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:12 crc kubenswrapper[4875]: E0130 16:57:12.136650 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:12 crc kubenswrapper[4875]: I0130 16:57:12.136685 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:12 crc kubenswrapper[4875]: I0130 16:57:12.136764 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:12 crc kubenswrapper[4875]: E0130 16:57:12.136787 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:12 crc kubenswrapper[4875]: E0130 16:57:12.137086 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:12 crc kubenswrapper[4875]: I0130 16:57:12.155286 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:12 crc kubenswrapper[4875]: I0130 16:57:12.155330 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:12 crc kubenswrapper[4875]: I0130 16:57:12.155343 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:12 crc kubenswrapper[4875]: I0130 16:57:12.155360 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:12 crc kubenswrapper[4875]: I0130 16:57:12.155372 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:12Z","lastTransitionTime":"2026-01-30T16:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 16:57:12 crc kubenswrapper[4875]: I0130 16:57:12.315322 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs\") pod \"network-metrics-daemon-ptnnq\" (UID: \"64282947-3e36-453a-b460-ada872b157c9\") " pod="openshift-multus/network-metrics-daemon-ptnnq"
Jan 30 16:57:12 crc kubenswrapper[4875]: E0130 16:57:12.315607 4875 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 16:57:12 crc kubenswrapper[4875]: E0130 16:57:12.315726 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs podName:64282947-3e36-453a-b460-ada872b157c9 nodeName:}" failed. No retries permitted until 2026-01-30 16:57:20.315695936 +0000 UTC m=+50.863059340 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs") pod "network-metrics-daemon-ptnnq" (UID: "64282947-3e36-453a-b460-ada872b157c9") : object "openshift-multus"/"metrics-daemon-secret" not registered
[Node-status block recurs at 16:57:12.361 and 16:57:12.464.]
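The nestedpendingoperations entry shows the volume manager's per-operation exponential backoff: after repeated failures, retries are refused until the current delay elapses. A toy version of that policy follows; the 500 ms initial delay and 2-minute cap are assumptions for illustration, though doubling from 500 ms does land on the 8 s durationBeforeRetry seen above after five straight failures:

    // Toy sketch of a per-operation exponential backoff: each failure doubles
    // durationBeforeRetry up to a cap, and retries are refused until
    // lastError + duration has passed. Constants are illustrative.
    package main

    import (
        "fmt"
        "time"
    )

    type expBackoff struct {
        lastError time.Time
        duration  time.Duration
    }

    func (b *expBackoff) update(now time.Time, initial, max time.Duration) {
        switch {
        case b.duration == 0:
            b.duration = initial
        case b.duration*2 < max:
            b.duration *= 2
        default:
            b.duration = max
        }
        b.lastError = now
    }

    func (b *expBackoff) retryAllowed(now time.Time) bool {
        return now.After(b.lastError.Add(b.duration))
    }

    func main() {
        var b expBackoff
        now := time.Now()
        for i := 1; i <= 5; i++ {
            b.update(now, 500*time.Millisecond, 2*time.Minute)
            fmt.Printf("failure %d: no retries permitted for %v\n", i, b.duration)
        }
        fmt.Println("retry allowed now?", b.retryAllowed(now)) // false until the window elapses
    }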
[Node-status block recurs at ~100 ms intervals: 16:57:12.568, 12.672, 12.775, 12.881, 12.985, and 16:57:13.090.]
Jan 30 16:57:13 crc kubenswrapper[4875]: I0130 16:57:13.112176 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 16:59:30.31609232 +0000 UTC
[Node-status block recurs at 16:57:13.194, 13.297, 13.401, 13.504, 13.609, 13.712, 13.815, and 13.919.]
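The certificate_manager lines in this stretch always report the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on each pass (2025-12-03, 2025-11-24, 2025-11-17, ...). That is expected: the client certificate manager re-jitters the deadline, picking a random point late in the certificate's validity window (roughly the 70-90% band upstream). A sketch, with the NotBefore date assumed since the log only shows the expiry:

    // Sketch of the jittered rotation deadline: each evaluation picks a random
    // point late in the NotBefore..NotAfter window, so the logged deadline
    // differs from line to line. Illustrative, not the upstream code.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration((0.7 + 0.2*rand.Float64()) * float64(total))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
        notBefore := notAfter.AddDate(-1, 0, 0) // assumed issue date; the log shows only the expiry
        for i := 0; i < 3; i++ {
            fmt.Println("rotation deadline is", nextRotationDeadline(notBefore, notAfter))
        }
    }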
[Node-status block recurs at 16:57:14.023.]
Jan 30 16:57:14 crc kubenswrapper[4875]: I0130 16:57:14.112913 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 21:44:46.019164411 +0000 UTC
[Node-status block recurs at 16:57:14.126.]
[The "No sandbox for pod can be found. Need to start a new one" / "Error syncing pod, skipping" pairs for network-check-source-55646444c4-trplf, network-metrics-daemon-ptnnq, networking-console-plugin-85b44fc459-gdk6g, and network-check-target-xd92c recur at 16:57:14.135, identical to the 16:57:12.135 entries apart from ordering and timestamps.]
[Node-status block recurs at 16:57:14.230.]
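The no-sandbox/error-sync pairs at 16:57:12.135 and 16:57:14.135 (and again at 16:57:16.135 below) are the same four pods being retried on each sync tick: a pod that needs a new sandbox is skipped outright while the runtime reports NetworkReady=false, and only host-network pods can proceed. A simplified sketch of that gate, using stand-in types rather than the real kubelet/CRI interfaces:

    // Sketch of the gate behind "Error syncing pod, skipping": before creating
    // a sandbox for a non-host-network pod, the sync path consults the
    // runtime's NetworkReady status and refuses to proceed.
    package main

    import (
        "errors"
        "fmt"
    )

    type runtimeStatus struct{ networkReady bool }

    type pod struct {
        name        string
        hostNetwork bool
    }

    func syncPod(p pod, rs runtimeStatus) error {
        if !p.hostNetwork && !rs.networkReady {
            return errors.New("network is not ready: container runtime network not ready: NetworkReady=false")
        }
        // ...create the sandbox and start containers...
        return nil
    }

    func main() {
        rs := runtimeStatus{networkReady: false}
        pods := []pod{
            {"openshift-multus/network-metrics-daemon-ptnnq", false},
            {"openshift-ovn-kubernetes/ovnkube-node-mps6c", true}, // host-network: may proceed
        }
        for _, p := range pods {
            if err := syncPod(p, rs); err != nil {
                fmt.Printf("Error syncing pod, skipping pod=%q err=%v\n", p.name, err)
            }
        }
    }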
[Node-status block recurs at 16:57:14.334, 14.437, 14.540, 14.642, 14.745, 14.849, 14.957, and 16:57:15.061.]
Jan 30 16:57:15 crc kubenswrapper[4875]: I0130 16:57:15.113534 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:07:41.944418348 +0000 UTC
[Node-status block recurs at 16:57:15.165 and 15.268.]
Jan 30 16:57:15 crc kubenswrapper[4875]: I0130 16:57:15.340776 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c"
Jan 30 16:57:15 crc kubenswrapper[4875]: I0130 16:57:15.342827 4875 scope.go:117] "RemoveContainer" containerID="ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd"
[Node-status block recurs at 16:57:15.372 and 15.476.]
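"SyncLoop (probe)" marks a readiness-probe result change being handed to the sync loop, which here triggers a sync that also schedules removal of the previous dead ovnkube-controller container (the "RemoveContainer" entry). The kubelet's prober caches results and only publishes transitions, which is why these entries are rare compared to the probe interval. A minimal sketch of such a publish-on-change cache, simplified and not the actual prober API:

    // Sketch of a probe-result cache that publishes only transitions, in the
    // spirit of the kubelet prober's results manager.
    package main

    import "fmt"

    type Result string

    const (
        Ready    Result = "ready"
        NotReady Result = "" // the log shows status="" for a not-yet-ready probe
    )

    type Update struct {
        Pod    string
        Result Result
    }

    type Manager struct {
        cache   map[string]Result
        Updates chan Update
    }

    func NewManager() *Manager {
        return &Manager{cache: map[string]Result{}, Updates: make(chan Update, 8)}
    }

    // Set records a probe result and emits an update only when it changes.
    func (m *Manager) Set(pod string, r Result) {
        if old, ok := m.cache[pod]; ok && old == r {
            return
        }
        m.cache[pod] = r
        m.Updates <- Update{pod, r}
    }

    func main() {
        m := NewManager()
        m.Set("openshift-ovn-kubernetes/ovnkube-node-mps6c", NotReady)
        m.Set("openshift-ovn-kubernetes/ovnkube-node-mps6c", NotReady) // suppressed: no change
        u := <-m.Updates
        fmt.Printf("SyncLoop (probe) probe=%q status=%q pod=%q\n", "readiness", u.Result, u.Pod)
    }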
[Node-status block recurs at 16:57:15.579, 15.681, 15.784, 15.886, 15.989, and 16:57:16.092.]
Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.113753 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 14:59:22.560457677 +0000 UTC
[The "No sandbox for pod can be found. Need to start a new one" / "Error syncing pod, skipping" pairs for the same four pods recur at 16:57:16.135, identical in content to the 16:57:12.135 entries.]
[Node-status block recurs at 16:57:16.195 and 16.298.]
[Node-status block recurs at 16:57:16.400.]
Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.466611 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovnkube-controller/2.log"
Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.467505 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovnkube-controller/1.log"
Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.469940 4875 generic.go:334] "Generic (PLEG): container finished" podID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerID="d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272" exitCode=1
Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.469977 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerDied","Data":"d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272"}
Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.470012 4875 scope.go:117] "RemoveContainer" containerID="ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd"
Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.471340 4875 scope.go:117] "RemoveContainer" containerID="d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272"
Jan 30 16:57:16 crc kubenswrapper[4875]: E0130 16:57:16.471703 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132"
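The PLEG ContainerDied event (exitCode=1) and the CrashLoopBackOff refusal belong to one restart cycle: each consecutive crash doubles the restart delay from a 10 s base up to a 5 m cap, so "back-off 20s" indicates the second consecutive crash of ovnkube-controller. A toy model; the constants match the commonly documented kubelet defaults, but the code itself is illustrative:

    // Toy model of CrashLoopBackOff: the restart delay doubles per consecutive
    // crash from 10s up to a 5m cap; a sync attempt inside the window fails
    // the way pod_workers logs above.
    package main

    import (
        "fmt"
        "time"
    )

    const (
        initialBackoff = 10 * time.Second // assumed base delay
        maxBackoff     = 5 * time.Minute  // assumed cap
    )

    func backoffForCrash(consecutiveCrashes int) time.Duration {
        d := initialBackoff
        for i := 1; i < consecutiveCrashes; i++ {
            d *= 2
            if d >= maxBackoff {
                return maxBackoff
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 6; n++ {
            fmt.Printf("crash #%d: back-off %v restarting failed container\n", n, backoffForCrash(n))
        }
        // crash #2 prints "back-off 20s", matching the ovnkube-controller entry above.
    }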
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.500197 4875 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.504805 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.504981 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.505086 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.505215 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.505332 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:16Z","lastTransitionTime":"2026-01-30T16:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.512991 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.528753 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.541248 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fdb34b6f0a28383b063244f9229d8a4d46f8e33104f7a3cad58b8b3188ff582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9ae124864c3ff9984c3b20615ed908dc0f7b190f322642d97dbd0338aea92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.556792 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.571674 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.584032 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.598858 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.608481 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.608639 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:16 crc 
kubenswrapper[4875]: I0130 16:57:16.608725 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.608832 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.608924 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:16Z","lastTransitionTime":"2026-01-30T16:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.618121 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc
8bdd378e47cc61abba2b6272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"message\\\":\\\"rnal_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:57:02.225373 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:16Z\\\",\\\"message\\\":\\\":29103\\\\\\\"\\\\nI0130 16:57:16.186196 6505 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:57:16.186232 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-ck4hq\\\\nF0130 16:57:16.186242 6505 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add 
Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set nod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.630088 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.640618 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.652996 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.671782 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.685325 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.699165 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.710167 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64282947-3e36-453a-b460-ada872b157c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ptnnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:16Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.711149 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.711182 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.711193 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.711213 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.711225 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:16Z","lastTransitionTime":"2026-01-30T16:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.814058 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.814375 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.814452 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.814531 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.814616 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:16Z","lastTransitionTime":"2026-01-30T16:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.917450 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.917953 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.918190 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.918427 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:16 crc kubenswrapper[4875]: I0130 16:57:16.918677 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:16Z","lastTransitionTime":"2026-01-30T16:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.022301 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.022340 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.022354 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.022374 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.022386 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:17Z","lastTransitionTime":"2026-01-30T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.114497 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 05:07:26.404302951 +0000 UTC Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.130949 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.131025 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.131048 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.131080 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.131102 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:17Z","lastTransitionTime":"2026-01-30T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.213206 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.213262 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.213275 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.213296 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.213310 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:17Z","lastTransitionTime":"2026-01-30T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:17 crc kubenswrapper[4875]: E0130 16:57:17.234643 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.240074 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.240147 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.240169 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.240193 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.240210 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:17Z","lastTransitionTime":"2026-01-30T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:17 crc kubenswrapper[4875]: E0130 16:57:17.258881 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.263400 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.263428 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.263437 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.263450 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.263459 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:17Z","lastTransitionTime":"2026-01-30T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:17 crc kubenswrapper[4875]: E0130 16:57:17.279734 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.283309 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.283340 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
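
Every status report in this stretch carries the same Ready=False condition: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no CNI network definition yet. A minimal Go sketch of that decision, illustrative rather than CRI-O's actual code: readiness here simply requires at least one .conf/.conflist/.json file in the CNI config directory.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniReady reports whether dir holds at least one CNI network config file.
    func cniReady(dir string) bool {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true
            }
        }
        return false
    }

    func main() {
        // Stays false until the network provider (here, OVN-Kubernetes) writes its config.
        fmt.Println(cniReady("/etc/kubernetes/cni/net.d"))
    }
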
event="NodeHasNoDiskPressure" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.283351 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.283367 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.283378 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:17Z","lastTransitionTime":"2026-01-30T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:17 crc kubenswrapper[4875]: E0130 16:57:17.298357 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.302873 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.303165 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
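
The status patch itself is never evaluated: the kubelet's POST to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 fails during the TLS handshake, because the webhook's serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2026-01-30. Go's crypto/x509 produces exactly this "certificate has expired or is not yet valid" error whenever the current time falls outside the certificate's NotBefore/NotAfter window; below is a standalone sketch of the same comparison (the PEM path is a placeholder, not a file from this host).

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        pemBytes, err := os.ReadFile("webhook-cert.pem") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        now := time.Now()
        switch {
        case now.Before(cert.NotBefore):
            // Mirrors the "is not yet valid" branch of the logged error.
            fmt.Printf("not yet valid: current time %s is before %s\n",
                now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
        case now.After(cert.NotAfter):
            // Mirrors the "has expired ... current time X is after Y" message above.
            fmt.Printf("expired: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        default:
            fmt.Println("certificate is within its validity window")
        }
    }
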
event="NodeHasNoDiskPressure" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.303264 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.303362 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.303450 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:17Z","lastTransitionTime":"2026-01-30T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:17 crc kubenswrapper[4875]: E0130 16:57:17.322467 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:17Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:17 crc kubenswrapper[4875]: E0130 16:57:17.322715 4875 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.324929 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
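
After several identical failures the kubelet gives up: "update node status exceeds retry count" marks the exhaustion of a small fixed retry budget within one sync, after which the next sync period starts the cycle over (which is why the same patch-and-fail block repeats throughout this log). A sketch of that bounded-retry shape; the constant and helper below are illustrative stand-ins, not the kubelet's actual code.

    package main

    import (
        "errors"
        "fmt"
    )

    const nodeStatusUpdateRetry = 5 // assumed bound; the upstream kubelet uses a small fixed constant

    // updateNodeStatus runs tryUpdate until it succeeds or the budget is spent.
    func updateNodeStatus(tryUpdate func(attempt int) error) error {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := tryUpdate(i); err == nil {
                return nil
            }
        }
        return errors.New("update node status exceeds retry count")
    }

    func main() {
        err := updateNodeStatus(func(attempt int) error {
            // Every attempt fails the same way while the webhook cert is expired.
            return fmt.Errorf("attempt %d: failed calling webhook: certificate has expired", attempt)
        })
        fmt.Println(err) // update node status exceeds retry count
    }
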
event="NodeHasSufficientMemory" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.324981 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.324994 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.325013 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.325029 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:17Z","lastTransitionTime":"2026-01-30T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.427630 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.427978 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.428064 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.428325 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.428437 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:17Z","lastTransitionTime":"2026-01-30T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.474073 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovnkube-controller/2.log" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.530709 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.531018 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.531234 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.531366 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.531434 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:17Z","lastTransitionTime":"2026-01-30T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.634679 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.634729 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.634737 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.634757 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.634767 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:17Z","lastTransitionTime":"2026-01-30T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.737445 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.737486 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.737496 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.737515 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.737524 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:17Z","lastTransitionTime":"2026-01-30T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.839866 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.839906 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.839917 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.839934 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.839945 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:17Z","lastTransitionTime":"2026-01-30T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.942082 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.942367 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.942548 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.942802 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:17 crc kubenswrapper[4875]: I0130 16:57:17.942986 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:17Z","lastTransitionTime":"2026-01-30T16:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.045767 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.046077 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.046201 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.046289 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.046364 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:18Z","lastTransitionTime":"2026-01-30T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.116340 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:50:48.953273996 +0000 UTC Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.135686 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.135772 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.135686 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:18 crc kubenswrapper[4875]: E0130 16:57:18.135946 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:18 crc kubenswrapper[4875]: E0130 16:57:18.135965 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:18 crc kubenswrapper[4875]: E0130 16:57:18.136048 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
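
The certificate_manager line above explains its own arithmetic: the kubelet-serving certificate expires 2026-02-24, and the manager schedules rotation at a randomized point late in the validity window, deliberately well before expiry, so nodes do not all rotate at once (the logged deadline, 2025-11-20, already lies in the past of this node's clock, so rotation is due). A sketch of that deadline computation; the 0.7-0.9 jitter range and the assumed issue date are assumptions, not values taken from this log.

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a randomized point in the back portion of the
    // certificate's validity window (assumed 70-90% of its lifetime).
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        lifetime := notAfter.Sub(notBefore)
        jitter := 0.7 + 0.2*rand.Float64() // assumed jitter range
        return notBefore.Add(time.Duration(float64(lifetime) * jitter))
    }

    func main() {
        notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time, one year before the logged expiry
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log line above
        fmt.Println(rotationDeadline(notBefore, notAfter))         // lands months before notAfter, as in the log
    }
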
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.136685 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:18 crc kubenswrapper[4875]: E0130 16:57:18.136879 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.148144 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.148370 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.148440 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.148501 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.148557 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:18Z","lastTransitionTime":"2026-01-30T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.251809 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.251873 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.251894 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.251920 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.251938 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:18Z","lastTransitionTime":"2026-01-30T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.354542 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.355447 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.355538 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.355629 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.355697 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:18Z","lastTransitionTime":"2026-01-30T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.458665 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.458744 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.458763 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.458786 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.458803 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:18Z","lastTransitionTime":"2026-01-30T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.562445 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.562538 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.562561 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.562663 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.562705 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:18Z","lastTransitionTime":"2026-01-30T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.666479 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.666576 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.666626 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.666658 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.666692 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:18Z","lastTransitionTime":"2026-01-30T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.769520 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.769627 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.769648 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.770110 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.770161 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:18Z","lastTransitionTime":"2026-01-30T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.875673 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.875774 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.875788 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.876215 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.876263 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:18Z","lastTransitionTime":"2026-01-30T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.980265 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.980372 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.980398 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.980433 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:18 crc kubenswrapper[4875]: I0130 16:57:18.980456 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:18Z","lastTransitionTime":"2026-01-30T16:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.084170 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.084226 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.084241 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.084262 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.084275 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:19Z","lastTransitionTime":"2026-01-30T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.117510 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:10:11.236893043 +0000 UTC Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.187974 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.188027 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.188041 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.188061 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.188074 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:19Z","lastTransitionTime":"2026-01-30T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.291113 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.291164 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.291179 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.291204 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.291221 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:19Z","lastTransitionTime":"2026-01-30T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.395450 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.395520 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.395538 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.395570 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.395641 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:19Z","lastTransitionTime":"2026-01-30T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.499374 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.499432 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.499448 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.499470 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.499487 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:19Z","lastTransitionTime":"2026-01-30T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.601907 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.601993 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.602018 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.602051 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.602073 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:19Z","lastTransitionTime":"2026-01-30T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.705926 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.705968 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.705978 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.705994 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.706003 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:19Z","lastTransitionTime":"2026-01-30T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.808624 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.808702 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.808719 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.808747 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.808767 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:19Z","lastTransitionTime":"2026-01-30T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.911652 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.911688 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.911695 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.911710 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:19 crc kubenswrapper[4875]: I0130 16:57:19.911719 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:19Z","lastTransitionTime":"2026-01-30T16:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.015428 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.015495 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.015513 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.015537 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.015558 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:20Z","lastTransitionTime":"2026-01-30T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.118003 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 02:01:35.082171181 +0000 UTC Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.119189 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.119674 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.119861 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.120028 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.120174 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:20Z","lastTransitionTime":"2026-01-30T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.135746 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.135880 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:20 crc kubenswrapper[4875]: E0130 16:57:20.135963 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.135752 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:20 crc kubenswrapper[4875]: E0130 16:57:20.136102 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:20 crc kubenswrapper[4875]: E0130 16:57:20.136323 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.137038 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:20 crc kubenswrapper[4875]: E0130 16:57:20.140742 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.160144 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.188219 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df4484902
17d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.211810 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.224243 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.224317 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.224340 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.224371 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.224395 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:20Z","lastTransitionTime":"2026-01-30T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.227903 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.250431 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.269921 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.283336 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64282947-3e36-453a-b460-ada872b157c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ptnnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.297954 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.316220 4875 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.328373 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.328493 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.328549 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.328625 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.328648 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:20Z","lastTransitionTime":"2026-01-30T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.331159 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.349387 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.365693 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fdb34b6f0a28383b063244f9229d8a4d46f8e33104f7a3cad58b8b3188ff582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9ae124864c3ff9984c3b20615ed908dc0f7b190f322642d97dbd0338aea92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.392646 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.409554 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs\") pod \"network-metrics-daemon-ptnnq\" (UID: \"64282947-3e36-453a-b460-ada872b157c9\") " pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:20 crc kubenswrapper[4875]: E0130 16:57:20.409809 4875 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:57:20 crc kubenswrapper[4875]: E0130 16:57:20.409963 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs podName:64282947-3e36-453a-b460-ada872b157c9 nodeName:}" 
failed. No retries permitted until 2026-01-30 16:57:36.409922897 +0000 UTC m=+66.957286470 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs") pod "network-metrics-daemon-ptnnq" (UID: "64282947-3e36-453a-b460-ada872b157c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.412881 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.426390 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.437404 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.437454 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.437464 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.437486 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.437497 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:20Z","lastTransitionTime":"2026-01-30T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.449844 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:20Z is after 
2025-08-24T17:21:41Z" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.468662 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4
8be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"message\\\":\\\"rnal_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:57:02.225373 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z 
i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:16Z\\\",\\\"message\\\":\\\":29103\\\\\\\"\\\\nI0130 16:57:16.186196 6505 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:57:16.186232 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-ck4hq\\\\nF0130 16:57:16.186242 6505 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set 
nod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.540061 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.540226 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.540321 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.540395 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.540472 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:20Z","lastTransitionTime":"2026-01-30T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.643450 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.643520 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.643538 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.643565 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.643625 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:20Z","lastTransitionTime":"2026-01-30T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.747561 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.748376 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.748668 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.748943 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.749193 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:20Z","lastTransitionTime":"2026-01-30T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.853365 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.853436 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.853454 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.853480 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.853502 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:20Z","lastTransitionTime":"2026-01-30T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.957114 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.957199 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.957225 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.957273 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:20 crc kubenswrapper[4875]: I0130 16:57:20.957304 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:20Z","lastTransitionTime":"2026-01-30T16:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.060041 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.060452 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.060558 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.060692 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.060789 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:21Z","lastTransitionTime":"2026-01-30T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.120572 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:27:18.623956045 +0000 UTC Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.164316 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.164386 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.164408 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.164438 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.164457 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:21Z","lastTransitionTime":"2026-01-30T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.267936 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.268025 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.268051 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.268085 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.268112 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:21Z","lastTransitionTime":"2026-01-30T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.372443 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.372511 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.372523 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.372605 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.372617 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:21Z","lastTransitionTime":"2026-01-30T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.475413 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.475454 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.475465 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.475502 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.475515 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:21Z","lastTransitionTime":"2026-01-30T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.578921 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.579018 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.579044 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.579077 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.579099 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:21Z","lastTransitionTime":"2026-01-30T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.682028 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.682074 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.682083 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.682099 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.682109 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:21Z","lastTransitionTime":"2026-01-30T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.785575 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.785972 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.786074 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.786168 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.786320 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:21Z","lastTransitionTime":"2026-01-30T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.824914 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.825124 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:21 crc kubenswrapper[4875]: E0130 16:57:21.825227 4875 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:57:21 crc kubenswrapper[4875]: E0130 16:57:21.825296 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:57:53.825279991 +0000 UTC m=+84.372643384 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:57:21 crc kubenswrapper[4875]: E0130 16:57:21.825535 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:57:53.825525439 +0000 UTC m=+84.372888822 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.889388 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.889421 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.889431 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.889445 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.889456 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:21Z","lastTransitionTime":"2026-01-30T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.926152 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.926208 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.926235 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:21 crc kubenswrapper[4875]: E0130 16:57:21.926323 4875 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:57:21 crc kubenswrapper[4875]: E0130 16:57:21.926324 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:57:21 crc kubenswrapper[4875]: E0130 16:57:21.926346 4875 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:57:21 crc kubenswrapper[4875]: E0130 16:57:21.926360 4875 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:57:21 crc kubenswrapper[4875]: E0130 16:57:21.926366 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:57:53.926353258 +0000 UTC m=+84.473716641 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:57:21 crc kubenswrapper[4875]: E0130 16:57:21.926392 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:57:53.926383329 +0000 UTC m=+84.473746712 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:57:21 crc kubenswrapper[4875]: E0130 16:57:21.926666 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:57:21 crc kubenswrapper[4875]: E0130 16:57:21.926738 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:57:21 crc kubenswrapper[4875]: E0130 16:57:21.926768 4875 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:57:21 crc kubenswrapper[4875]: E0130 16:57:21.926890 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:57:53.926850044 +0000 UTC m=+84.474213457 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.991980 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.992059 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.992077 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.992110 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:21 crc kubenswrapper[4875]: I0130 16:57:21.992131 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:21Z","lastTransitionTime":"2026-01-30T16:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.095055 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.095107 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.095119 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.095139 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.095153 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:22Z","lastTransitionTime":"2026-01-30T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.121193 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 20:32:52.404209632 +0000 UTC Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.135136 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.135159 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.135237 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.135287 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:22 crc kubenswrapper[4875]: E0130 16:57:22.135401 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:22 crc kubenswrapper[4875]: E0130 16:57:22.135522 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:22 crc kubenswrapper[4875]: E0130 16:57:22.135645 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:22 crc kubenswrapper[4875]: E0130 16:57:22.135792 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.198369 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.198405 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.198414 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.198436 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.198447 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:22Z","lastTransitionTime":"2026-01-30T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.300569 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.301125 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.301311 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.301419 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.301659 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:22Z","lastTransitionTime":"2026-01-30T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.404063 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.404619 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.404794 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.404960 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.405120 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:22Z","lastTransitionTime":"2026-01-30T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.511113 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.511169 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.511180 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.511199 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.511212 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:22Z","lastTransitionTime":"2026-01-30T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.615938 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.615975 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.615986 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.616005 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.616017 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:22Z","lastTransitionTime":"2026-01-30T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.719516 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.720064 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.720243 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.720396 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.720556 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:22Z","lastTransitionTime":"2026-01-30T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.824104 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.824168 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.824185 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.824205 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.824220 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:22Z","lastTransitionTime":"2026-01-30T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.927101 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.927500 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.927653 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.927829 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:22 crc kubenswrapper[4875]: I0130 16:57:22.927943 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:22Z","lastTransitionTime":"2026-01-30T16:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.031567 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.032116 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.032342 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.032485 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.032968 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:23Z","lastTransitionTime":"2026-01-30T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.122271 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:32:14.869224754 +0000 UTC Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.138332 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.138833 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.138968 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.139103 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.139239 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:23Z","lastTransitionTime":"2026-01-30T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.242082 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.242143 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.242162 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.242188 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.242208 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:23Z","lastTransitionTime":"2026-01-30T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.345811 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.346211 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.346340 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.346544 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.346759 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:23Z","lastTransitionTime":"2026-01-30T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.450700 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.450812 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.450832 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.450862 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.450887 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:23Z","lastTransitionTime":"2026-01-30T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.554397 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.554458 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.554472 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.554496 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.554514 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:23Z","lastTransitionTime":"2026-01-30T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.657756 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.657816 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.657837 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.657865 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.657889 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:23Z","lastTransitionTime":"2026-01-30T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.761973 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.762032 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.762050 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.762077 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.762096 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:23Z","lastTransitionTime":"2026-01-30T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.865842 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.865916 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.865934 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.865963 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.865995 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:23Z","lastTransitionTime":"2026-01-30T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.969395 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.969483 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.969514 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.969552 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:23 crc kubenswrapper[4875]: I0130 16:57:23.969577 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:23Z","lastTransitionTime":"2026-01-30T16:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.073552 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.073625 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.073638 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.073659 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.073672 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:24Z","lastTransitionTime":"2026-01-30T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.123443 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 10:53:47.408609596 +0000 UTC Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.136221 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.136351 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.136436 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.136220 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:57:24 crc kubenswrapper[4875]: E0130 16:57:24.136565 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9"
Jan 30 16:57:24 crc kubenswrapper[4875]: E0130 16:57:24.136426 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:57:24 crc kubenswrapper[4875]: E0130 16:57:24.136887 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:57:24 crc kubenswrapper[4875]: E0130 16:57:24.137102 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.177177 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.177255 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.177268 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.177296 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.177311 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:24Z","lastTransitionTime":"2026-01-30T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.280183 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.280291 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.280303 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.280323 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.280335 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:24Z","lastTransitionTime":"2026-01-30T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.390019 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.390067 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.390079 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.390099 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.390111 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:24Z","lastTransitionTime":"2026-01-30T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.493460 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.493529 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.493547 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.493578 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.493633 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:24Z","lastTransitionTime":"2026-01-30T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.596664 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.597191 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.597409 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.597657 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.597873 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:24Z","lastTransitionTime":"2026-01-30T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.701857 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.701919 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.701931 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.701955 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.701971 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:24Z","lastTransitionTime":"2026-01-30T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.804844 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.804890 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.804902 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.804922 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.804935 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:24Z","lastTransitionTime":"2026-01-30T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.907844 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.907917 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.907935 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.907968 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:24 crc kubenswrapper[4875]: I0130 16:57:24.907989 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:24Z","lastTransitionTime":"2026-01-30T16:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.010953 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.011015 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.011032 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.011057 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.011075 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:25Z","lastTransitionTime":"2026-01-30T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.114749 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.114796 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.114808 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.114831 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.114843 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:25Z","lastTransitionTime":"2026-01-30T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.123624 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:17:56.519546968 +0000 UTC
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.217488 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.217547 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.217560 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.217579 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.217614 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:25Z","lastTransitionTime":"2026-01-30T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.322662 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.323153 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.323315 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.323466 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.323638 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:25Z","lastTransitionTime":"2026-01-30T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.427271 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.427726 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.427943 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.428119 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.428263 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:25Z","lastTransitionTime":"2026-01-30T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.531628 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.532032 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.532119 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.532199 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.532263 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:25Z","lastTransitionTime":"2026-01-30T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.635627 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.635691 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.635710 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.635734 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.635750 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:25Z","lastTransitionTime":"2026-01-30T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.739249 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.739312 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.739328 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.739350 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.739363 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:25Z","lastTransitionTime":"2026-01-30T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.843422 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.843458 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.843466 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.843480 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.843491 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:25Z","lastTransitionTime":"2026-01-30T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.916172 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.932036 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.934338 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:25Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.947062 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.947377 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.947962 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.948177 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.948320 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:25Z","lastTransitionTime":"2026-01-30T16:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.955563 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:25Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.972162 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:25Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:25 crc kubenswrapper[4875]: I0130 16:57:25.989148 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:25Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.002205 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fdb34b6f0a28383b063244f9229d8a4d46f8e33104f7a3cad58b8b3188ff582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9ae124864c3ff9984c3b20615ed908dc0f7b190f322642d97dbd0338aea92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:26Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.024514 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:26Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.041701 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:26Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.052364 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.052410 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.052423 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.052455 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.052466 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:26Z","lastTransitionTime":"2026-01-30T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.056575 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:26Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.075014 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:26Z is after 2025-08-24T17:21:41Z"
Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.097018 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"message\\\":\\\"rnal_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:57:02.225373 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:16Z\\\",\\\"message\\\":\\\":29103\\\\\\\"\\\\nI0130 16:57:16.186196 6505 model_client.go:382] Update operations generated as: 
[{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:57:16.186232 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-ck4hq\\\\nF0130 16:57:16.186242 6505 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set nod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.111902 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.124514 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:35:24.398454388 +0000 UTC Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.128624 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.135152 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.135315 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:26 crc kubenswrapper[4875]: E0130 16:57:26.135407 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.135441 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.135461 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:26 crc kubenswrapper[4875]: E0130 16:57:26.135653 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:26 crc kubenswrapper[4875]: E0130 16:57:26.135880 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:26 crc kubenswrapper[4875]: E0130 16:57:26.135954 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.149569 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.154474 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.154630 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.154706 4875 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.154775 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.154837 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:26Z","lastTransitionTime":"2026-01-30T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.173040 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mount
Path\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir
\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.188397 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.202103 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.214805 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64282947-3e36-453a-b460-ada872b157c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ptnnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:26Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.257941 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.258006 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.258026 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.258057 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.258079 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:26Z","lastTransitionTime":"2026-01-30T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.362044 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.362123 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.362148 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.362184 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.362211 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:26Z","lastTransitionTime":"2026-01-30T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.465887 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.465958 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.465985 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.466016 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.466040 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:26Z","lastTransitionTime":"2026-01-30T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.569918 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.569978 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.569996 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.570025 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.570046 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:26Z","lastTransitionTime":"2026-01-30T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.673339 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.673394 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.673407 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.673429 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.673445 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:26Z","lastTransitionTime":"2026-01-30T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.776295 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.776381 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.776403 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.776432 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.776455 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:26Z","lastTransitionTime":"2026-01-30T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.880186 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.880261 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.880281 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.880313 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.880334 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:26Z","lastTransitionTime":"2026-01-30T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.982985 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.983087 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.983113 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.983184 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:26 crc kubenswrapper[4875]: I0130 16:57:26.983212 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:26Z","lastTransitionTime":"2026-01-30T16:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.086825 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.086892 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.086906 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.086926 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.086938 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:27Z","lastTransitionTime":"2026-01-30T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.125516 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:30:57.966319883 +0000 UTC Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.189732 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.189765 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.189774 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.189790 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.189798 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:27Z","lastTransitionTime":"2026-01-30T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.292483 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.292542 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.292555 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.292574 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.292607 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:27Z","lastTransitionTime":"2026-01-30T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.394726 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.394779 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.394793 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.394822 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.394841 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:27Z","lastTransitionTime":"2026-01-30T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.497657 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.497714 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.497726 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.497746 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.497758 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:27Z","lastTransitionTime":"2026-01-30T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.600409 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.600459 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.600471 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.600491 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.600503 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:27Z","lastTransitionTime":"2026-01-30T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.667280 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.667478 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.667507 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.667536 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.667555 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:27Z","lastTransitionTime":"2026-01-30T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:27 crc kubenswrapper[4875]: E0130 16:57:27.690425 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.696684 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.696744 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.696763 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.696784 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.696800 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:27Z","lastTransitionTime":"2026-01-30T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:27 crc kubenswrapper[4875]: E0130 16:57:27.715806 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.720546 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.720579 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.720611 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.720630 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.720642 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:27Z","lastTransitionTime":"2026-01-30T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:27 crc kubenswrapper[4875]: E0130 16:57:27.738248 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.742840 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.742888 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.742905 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.742924 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.742937 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:27Z","lastTransitionTime":"2026-01-30T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:27 crc kubenswrapper[4875]: E0130 16:57:27.756801 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.762420 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.762484 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.762501 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.762522 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.762552 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:27Z","lastTransitionTime":"2026-01-30T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:27 crc kubenswrapper[4875]: E0130 16:57:27.782006 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:27 crc kubenswrapper[4875]: E0130 16:57:27.782244 4875 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.785195 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.785284 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.785301 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.785324 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.785372 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:27Z","lastTransitionTime":"2026-01-30T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.889884 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.889926 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.889937 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.889958 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.889973 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:27Z","lastTransitionTime":"2026-01-30T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.994001 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.994054 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.994068 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.994091 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:27 crc kubenswrapper[4875]: I0130 16:57:27.994153 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:27Z","lastTransitionTime":"2026-01-30T16:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.098303 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.098371 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.098392 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.098419 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.098437 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:28Z","lastTransitionTime":"2026-01-30T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.126094 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:31:37.730299768 +0000 UTC Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.135940 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.136045 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.135957 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.135946 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:28 crc kubenswrapper[4875]: E0130 16:57:28.136189 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:28 crc kubenswrapper[4875]: E0130 16:57:28.136354 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:28 crc kubenswrapper[4875]: E0130 16:57:28.136536 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:28 crc kubenswrapper[4875]: E0130 16:57:28.136690 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.201881 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.201961 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.201977 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.202031 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.202050 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:28Z","lastTransitionTime":"2026-01-30T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.305105 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.305170 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.305188 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.305214 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.305234 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:28Z","lastTransitionTime":"2026-01-30T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.408342 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.408385 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.408394 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.408409 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.408418 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:28Z","lastTransitionTime":"2026-01-30T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.511155 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.511216 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.511227 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.511246 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.511259 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:28Z","lastTransitionTime":"2026-01-30T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.613370 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.613421 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.613430 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.613447 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.613457 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:28Z","lastTransitionTime":"2026-01-30T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.716467 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.716525 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.716537 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.716557 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.716572 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:28Z","lastTransitionTime":"2026-01-30T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.819752 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.819796 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.819806 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.819821 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.819831 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:28Z","lastTransitionTime":"2026-01-30T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.923018 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.923080 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.923099 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.923210 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:28 crc kubenswrapper[4875]: I0130 16:57:28.923237 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:28Z","lastTransitionTime":"2026-01-30T16:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.026680 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.026758 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.026781 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.026812 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.026834 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:29Z","lastTransitionTime":"2026-01-30T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.126722 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:33:10.788418206 +0000 UTC Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.129841 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.129906 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.129921 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.129943 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.129959 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:29Z","lastTransitionTime":"2026-01-30T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.232874 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.232953 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.232973 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.233004 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.233027 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:29Z","lastTransitionTime":"2026-01-30T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.337377 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.337437 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.337450 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.337472 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.337485 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:29Z","lastTransitionTime":"2026-01-30T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.440514 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.440641 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.440668 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.440701 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.440730 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:29Z","lastTransitionTime":"2026-01-30T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.544119 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.544192 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.544214 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.544244 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.544270 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:29Z","lastTransitionTime":"2026-01-30T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.647687 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.647734 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.647744 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.647763 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.647775 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:29Z","lastTransitionTime":"2026-01-30T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.750860 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.750928 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.750952 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.750984 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.751008 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:29Z","lastTransitionTime":"2026-01-30T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.854208 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.854250 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.854261 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.854277 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.854289 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:29Z","lastTransitionTime":"2026-01-30T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.957161 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.957237 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.957256 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.957283 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:29 crc kubenswrapper[4875]: I0130 16:57:29.957302 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:29Z","lastTransitionTime":"2026-01-30T16:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.059448 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.059482 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.059490 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.059504 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.059516 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:30Z","lastTransitionTime":"2026-01-30T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.126841 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 20:13:15.359660389 +0000 UTC Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.135000 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.135138 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:30 crc kubenswrapper[4875]: E0130 16:57:30.135269 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.135492 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.135535 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:30 crc kubenswrapper[4875]: E0130 16:57:30.135622 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:30 crc kubenswrapper[4875]: E0130 16:57:30.135762 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:30 crc kubenswrapper[4875]: E0130 16:57:30.135907 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.157082 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.161812 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.161845 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.161857 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.161877 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.161890 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:30Z","lastTransitionTime":"2026-01-30T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.170112 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.186571 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.209714 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba49f4eadb564174cdb325b4036e7a9a721352cace5c212d03b8b2f4ecef11dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"message\\\":\\\"rnal_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:57:02.225373 6307 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:02Z i\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:16Z\\\",\\\"message\\\":\\\":29103\\\\\\\"\\\\nI0130 16:57:16.186196 6505 model_client.go:382] Update operations generated as: 
[{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:57:16.186232 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-ck4hq\\\\nF0130 16:57:16.186242 6505 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set nod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.231369 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.246017 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"945ae17d-fe16-4501-bb14-56544b2c13c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3602451d315d0555abce0fd45866f7191ef2b169be6a2b71df9b206844d1eaa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c9696f430b3b9f427ae6573b228d01d9296814e8983dd48ade9374ab323d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e792bd5d0c930c7e45a3b73fdd1c146e50f7d686f9b7ded43e66de3547804b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.263419 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.263549 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.263630 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.263749 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.263840 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:30Z","lastTransitionTime":"2026-01-30T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.269915 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.283979 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.296723 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.309453 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.323551 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.336382 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.348939 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64282947-3e36-453a-b460-ada872b157c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ptnnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.360223 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.366161 4875 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.366200 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.366210 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.366227 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.366240 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:30Z","lastTransitionTime":"2026-01-30T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.370329 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.384445 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.394817 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fdb34b6f0a28383b063244f9229d8a4d46f8e33104f7a3cad58b8b3188ff582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9ae124864c3ff9984c3b20615ed908dc0f7b190f322642d97dbd0338aea92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.407104 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.468623 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.468677 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:30 crc 
kubenswrapper[4875]: I0130 16:57:30.468685 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.468702 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.468713 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:30Z","lastTransitionTime":"2026-01-30T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.571118 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.571167 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.571179 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.571199 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.571210 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:30Z","lastTransitionTime":"2026-01-30T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.673795 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.673841 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.673856 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.673876 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.673888 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:30Z","lastTransitionTime":"2026-01-30T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.775945 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.775984 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.775995 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.776009 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.776018 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:30Z","lastTransitionTime":"2026-01-30T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.878290 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.878325 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.878334 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.878351 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.878360 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:30Z","lastTransitionTime":"2026-01-30T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.981237 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.981282 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.981290 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.981307 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:30 crc kubenswrapper[4875]: I0130 16:57:30.981316 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:30Z","lastTransitionTime":"2026-01-30T16:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.084301 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.084693 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.084830 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.084926 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.085029 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:31Z","lastTransitionTime":"2026-01-30T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.127932 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:26:35.863754928 +0000 UTC Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.188267 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.188696 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.188769 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.188840 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.188905 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:31Z","lastTransitionTime":"2026-01-30T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.291732 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.291780 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.291790 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.291837 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.291849 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:31Z","lastTransitionTime":"2026-01-30T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.394291 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.394324 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.394333 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.394347 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.394357 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:31Z","lastTransitionTime":"2026-01-30T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.498365 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.498823 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.498834 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.498854 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.498866 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:31Z","lastTransitionTime":"2026-01-30T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.601155 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.601190 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.601199 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.601219 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.601228 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:31Z","lastTransitionTime":"2026-01-30T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.704031 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.704070 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.704079 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.704094 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.704103 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:31Z","lastTransitionTime":"2026-01-30T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.807258 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.807302 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.807312 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.807327 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.807336 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:31Z","lastTransitionTime":"2026-01-30T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.909659 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.909701 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.909713 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.909734 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:31 crc kubenswrapper[4875]: I0130 16:57:31.909747 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:31Z","lastTransitionTime":"2026-01-30T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.012751 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.013092 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.013226 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.013365 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.013454 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:32Z","lastTransitionTime":"2026-01-30T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.116146 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.116436 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.116504 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.116600 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.116729 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:32Z","lastTransitionTime":"2026-01-30T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.129262 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 11:41:21.156947707 +0000 UTC Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.135679 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.135711 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.135721 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:32 crc kubenswrapper[4875]: E0130 16:57:32.135828 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:32 crc kubenswrapper[4875]: E0130 16:57:32.135998 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.136061 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:32 crc kubenswrapper[4875]: E0130 16:57:32.136086 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:32 crc kubenswrapper[4875]: E0130 16:57:32.136356 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.136718 4875 scope.go:117] "RemoveContainer" containerID="d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272" Jan 30 16:57:32 crc kubenswrapper[4875]: E0130 16:57:32.136986 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.163129 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc
8bdd378e47cc61abba2b6272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:16Z\\\",\\\"message\\\":\\\":29103\\\\\\\"\\\\nI0130 16:57:16.186196 6505 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:57:16.186232 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-ck4hq\\\\nF0130 16:57:16.186242 6505 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set nod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.180982 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 
configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.198361 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.213178 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.218971 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.219018 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.219028 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.219044 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.219053 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:32Z","lastTransitionTime":"2026-01-30T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.229392 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:32Z is after 
2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.246722 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.264357 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.278879 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.291568 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.310154 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.322639 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.322674 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.322683 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.322700 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.322710 4875 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:32Z","lastTransitionTime":"2026-01-30T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.325491 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"945ae17d-fe16-4501-bb14-56544b2c13c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3602451d315d0555abce0fd45866f7191ef2b169be6a2b71df9b206844d1eaa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c9696f430b3b9f427ae6573b228d01d9296814e8983dd48ade9374ab323d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e792bd5d0c930c7e45a3b73fdd1c146e50f7d686f9b7ded43e66de3547804b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.361194 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df4484902
17d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.377974 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64282947-3e36-453a-b460-ada872b157c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ptnnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.398470 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.412259 4875 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.425385 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.425479 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.425502 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.425530 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.425553 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:32Z","lastTransitionTime":"2026-01-30T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.428153 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.441792 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.460371 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fdb34b6f0a28383b063244f9229d8a4d46f8e33104f7a3cad58b8b3188ff582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9ae124864c3ff9984c3b20615ed908dc0f7b190f322642d97dbd0338aea92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.528607 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.528712 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.528724 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.528742 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.529199 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:32Z","lastTransitionTime":"2026-01-30T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.632637 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.632683 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.632698 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.632717 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.632729 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:32Z","lastTransitionTime":"2026-01-30T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.736122 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.736171 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.736180 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.736201 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.736211 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:32Z","lastTransitionTime":"2026-01-30T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.839610 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.839657 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.839665 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.839679 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.839688 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:32Z","lastTransitionTime":"2026-01-30T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.942144 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.942189 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.942197 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.942213 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:32 crc kubenswrapper[4875]: I0130 16:57:32.942224 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:32Z","lastTransitionTime":"2026-01-30T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.044568 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.044824 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.044838 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.044856 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.044868 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:33Z","lastTransitionTime":"2026-01-30T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.129681 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:31:17.152943778 +0000 UTC Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.147750 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.147796 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.147806 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.147822 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.147832 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:33Z","lastTransitionTime":"2026-01-30T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.250306 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.250352 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.250362 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.250381 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.250393 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:33Z","lastTransitionTime":"2026-01-30T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.353198 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.353258 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.353274 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.353295 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.353308 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:33Z","lastTransitionTime":"2026-01-30T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.455916 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.456032 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.456042 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.456059 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.456069 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:33Z","lastTransitionTime":"2026-01-30T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.559068 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.559114 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.559125 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.559145 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.559155 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:33Z","lastTransitionTime":"2026-01-30T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.662014 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.662080 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.662104 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.662193 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.662217 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:33Z","lastTransitionTime":"2026-01-30T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.764550 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.764631 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.764650 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.764674 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.764693 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:33Z","lastTransitionTime":"2026-01-30T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.867609 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.867672 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.867685 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.867701 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.867712 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:33Z","lastTransitionTime":"2026-01-30T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.969830 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.969884 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.969897 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.969921 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:33 crc kubenswrapper[4875]: I0130 16:57:33.969933 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:33Z","lastTransitionTime":"2026-01-30T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.072644 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.072712 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.072731 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.072756 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.072774 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:34Z","lastTransitionTime":"2026-01-30T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.129858 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:31:59.38880197 +0000 UTC Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.135383 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.135461 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.135442 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:34 crc kubenswrapper[4875]: E0130 16:57:34.135623 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.135608 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:34 crc kubenswrapper[4875]: E0130 16:57:34.135905 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:34 crc kubenswrapper[4875]: E0130 16:57:34.136014 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:34 crc kubenswrapper[4875]: E0130 16:57:34.136169 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.175966 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.176290 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.176363 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.176435 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.176502 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:34Z","lastTransitionTime":"2026-01-30T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.283568 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.283632 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.283645 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.283661 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.283671 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:34Z","lastTransitionTime":"2026-01-30T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.386349 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.386392 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.386404 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.386423 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.386436 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:34Z","lastTransitionTime":"2026-01-30T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.489187 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.489225 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.489234 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.489269 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.489279 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:34Z","lastTransitionTime":"2026-01-30T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.592243 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.592652 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.592721 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.592786 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.592855 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:34Z","lastTransitionTime":"2026-01-30T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.695601 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.695934 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.696012 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.696088 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.696166 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:34Z","lastTransitionTime":"2026-01-30T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.798474 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.798509 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.798518 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.798531 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.798539 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:34Z","lastTransitionTime":"2026-01-30T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.901158 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.901209 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.901221 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.901239 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:34 crc kubenswrapper[4875]: I0130 16:57:34.901249 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:34Z","lastTransitionTime":"2026-01-30T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.003297 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.003349 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.003378 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.003399 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.003412 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:35Z","lastTransitionTime":"2026-01-30T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.106134 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.106185 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.106197 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.106216 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.106231 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:35Z","lastTransitionTime":"2026-01-30T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.130516 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:56:26.205778204 +0000 UTC Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.208882 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.208928 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.208940 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.208957 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.208971 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:35Z","lastTransitionTime":"2026-01-30T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.312035 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.312090 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.312103 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.312127 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.312140 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:35Z","lastTransitionTime":"2026-01-30T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.415337 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.415391 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.415405 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.415426 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.415439 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:35Z","lastTransitionTime":"2026-01-30T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.517694 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.518527 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.518638 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.518726 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.518803 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:35Z","lastTransitionTime":"2026-01-30T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.621312 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.621356 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.621367 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.621385 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.621396 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:35Z","lastTransitionTime":"2026-01-30T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.723726 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.723771 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.723782 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.723801 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.723813 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:35Z","lastTransitionTime":"2026-01-30T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.826188 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.826239 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.826248 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.826265 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.826275 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:35Z","lastTransitionTime":"2026-01-30T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.929077 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.929121 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.929130 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.929144 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:35 crc kubenswrapper[4875]: I0130 16:57:35.929157 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:35Z","lastTransitionTime":"2026-01-30T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.032243 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.032314 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.032327 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.032349 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.032361 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:36Z","lastTransitionTime":"2026-01-30T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.131067 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:09:02.35561906 +0000 UTC Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.134915 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.134985 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.134914 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:36 crc kubenswrapper[4875]: E0130 16:57:36.135064 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:36 crc kubenswrapper[4875]: E0130 16:57:36.135134 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.135173 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.135200 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.135211 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.135228 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.135240 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:36Z","lastTransitionTime":"2026-01-30T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:36 crc kubenswrapper[4875]: E0130 16:57:36.135268 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.135535 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:36 crc kubenswrapper[4875]: E0130 16:57:36.135749 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.237404 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.237451 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.237460 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.237477 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.237485 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:36Z","lastTransitionTime":"2026-01-30T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.339967 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.340281 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.340411 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.340502 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.340660 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:36Z","lastTransitionTime":"2026-01-30T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.443178 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.443218 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.443227 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.443242 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.443257 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:36Z","lastTransitionTime":"2026-01-30T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.480616 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs\") pod \"network-metrics-daemon-ptnnq\" (UID: \"64282947-3e36-453a-b460-ada872b157c9\") " pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:36 crc kubenswrapper[4875]: E0130 16:57:36.480866 4875 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:57:36 crc kubenswrapper[4875]: E0130 16:57:36.481177 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs podName:64282947-3e36-453a-b460-ada872b157c9 nodeName:}" failed. No retries permitted until 2026-01-30 16:58:08.481156262 +0000 UTC m=+99.028519645 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs") pod "network-metrics-daemon-ptnnq" (UID: "64282947-3e36-453a-b460-ada872b157c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.548011 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.548137 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.548148 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.548164 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.548174 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:36Z","lastTransitionTime":"2026-01-30T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.651043 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.651358 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.651507 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.651638 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.651736 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:36Z","lastTransitionTime":"2026-01-30T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
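The mount failure above is parked rather than retried immediately: "No retries permitted until 2026-01-30 16:58:08 ... (durationBeforeRetry 32s)" is the kubelet's per-operation exponential backoff, which roughly doubles the wait after each consecutive failure of the same volume operation, so a 32s window implies a run of earlier failures. A sketch of that doubling; the 500ms initial wait and the roughly two-minute cap are assumptions modeled on kubelet-style defaults, not values read from this log:

```go
// Sketch of the doubling backoff behind "durationBeforeRetry 32s": each
// consecutive failure of the same operation doubles the wait, clamped at
// a cap. The 500ms start and ~2m cap are assumed kubelet-style defaults.
package main

import (
	"fmt"
	"time"
)

func durationBeforeRetry(failures int) time.Duration {
	d := 500 * time.Millisecond
	const maxWait = 2*time.Minute + 2*time.Second // assumed cap
	for i := 1; i < failures; i++ {
		d *= 2
		if d > maxWait {
			return maxWait
		}
	}
	return d
}

func main() {
	for f := 1; f <= 9; f++ {
		fmt.Printf("failure %d -> wait %v\n", f, durationBeforeRetry(f))
	}
	// failure 7 -> wait 32s, matching the retry window logged above.
}
```

The retry cannot succeed here in any case: the secret lookup fails with object "openshift-multus"/"metrics-daemon-secret" not registered, so the window keeps growing until that object becomes visible to the kubelet.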
Has your network provider started?"} Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.754243 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.754566 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.754664 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.754730 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.754802 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:36Z","lastTransitionTime":"2026-01-30T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.858311 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.858371 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.858389 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.858412 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.858427 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:36Z","lastTransitionTime":"2026-01-30T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.961384 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.961421 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.961433 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.961450 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:36 crc kubenswrapper[4875]: I0130 16:57:36.961463 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:36Z","lastTransitionTime":"2026-01-30T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.063831 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.063881 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.063891 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.063905 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.063914 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:37Z","lastTransitionTime":"2026-01-30T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.131708 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 08:02:39.124499165 +0000 UTC Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.165978 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.166025 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.166035 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.166050 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.166061 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:37Z","lastTransitionTime":"2026-01-30T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.268861 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.269170 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.269275 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.269380 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.269457 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:37Z","lastTransitionTime":"2026-01-30T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.372194 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.372246 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.372259 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.372279 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.372292 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:37Z","lastTransitionTime":"2026-01-30T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.475516 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.475554 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.475564 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.475598 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.475614 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:37Z","lastTransitionTime":"2026-01-30T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.577962 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.578002 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.578012 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.578030 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.578042 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:37Z","lastTransitionTime":"2026-01-30T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.680443 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.680499 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.680515 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.680536 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.680565 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:37Z","lastTransitionTime":"2026-01-30T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.782598 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.782639 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.782649 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.782666 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.782676 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:37Z","lastTransitionTime":"2026-01-30T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.885726 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.885790 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.885809 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.885836 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.885854 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:37Z","lastTransitionTime":"2026-01-30T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.988787 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.988852 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.988864 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.988886 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:37 crc kubenswrapper[4875]: I0130 16:57:37.988896 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:37Z","lastTransitionTime":"2026-01-30T16:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.091517 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.091567 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.091577 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.091611 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.091620 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:38Z","lastTransitionTime":"2026-01-30T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.132256 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 23:03:39.267484183 +0000 UTC Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.135430 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.135509 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.135475 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.135424 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:38 crc kubenswrapper[4875]: E0130 16:57:38.135709 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:38 crc kubenswrapper[4875]: E0130 16:57:38.135805 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:38 crc kubenswrapper[4875]: E0130 16:57:38.135882 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:38 crc kubenswrapper[4875]: E0130 16:57:38.135935 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.155627 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.155672 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.155682 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.155697 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.155706 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:38Z","lastTransitionTime":"2026-01-30T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:38 crc kubenswrapper[4875]: E0130 16:57:38.167121 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.179400 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.179441 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
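The patch failure above is the most informative error in this stretch: the API server rejects the kubelet's status update because it cannot call the node.network-node-identity.openshift.io webhook, whose listener at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-30. Until that certificate is renewed, every status patch will keep failing the same way regardless of the CNI situation. A quick way to confirm the expiry independently of the kubelet is to dial the endpoint and inspect the presented certificate; a minimal sketch, assuming the listener from the log is reachable locally:

```go
// Minimal sketch: dial the webhook endpoint named in the log and print
// the validity window of the certificate it presents, to confirm the
// "certificate has expired" failure independently of the kubelet.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // we only want to read the cert, not trust it
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject: %s\nnot before: %s\nnot after: %s\n",
		cert.Subject, cert.NotBefore, cert.NotAfter)
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is expired, matching the x509 error above")
	}
}
```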
event="NodeHasNoDiskPressure" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.179452 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.179472 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.179493 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:38Z","lastTransitionTime":"2026-01-30T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:38 crc kubenswrapper[4875]: E0130 16:57:38.192129 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.196711 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.196755 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.196769 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.196789 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.196801 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:38Z","lastTransitionTime":"2026-01-30T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:38 crc kubenswrapper[4875]: E0130 16:57:38.208218 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.211318 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.211348 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.211356 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.211371 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.211381 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:38Z","lastTransitionTime":"2026-01-30T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:38 crc kubenswrapper[4875]: E0130 16:57:38.222804 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.225815 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.225844 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.225856 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.225873 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.225885 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:38Z","lastTransitionTime":"2026-01-30T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:38 crc kubenswrapper[4875]: E0130 16:57:38.237736 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: E0130 16:57:38.237867 4875 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.240091 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
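Every failed patch above is the same mechanical failure: the kubelet's node-status PATCH has to pass the node.network-node-identity.openshift.io admission webhook, and the API server's call to that webhook (Post https://127.0.0.1:9743/node) aborts during the TLS handshake because the serving certificate's validity ended at 2025-08-24T17:21:41Z while the node clock reads 2026-01-30. The kubelet attempts the update five times (the upstream kubelet's nodeStatusUpdateRetry constant) before logging "update node status exceeds retry count". A minimal sketch for confirming the expiry from the node itself, not part of the log; it assumes Python 3 with the third-party cryptography package (>= 42.0 for the *_utc properties) and must run where 127.0.0.1:9743 is reachable:

```python
# Sketch: inspect the certificate served by the webhook endpoint named in the
# log. Verification is disabled on purpose -- a default context would abort the
# handshake exactly the way the API server does.
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509  # third-party dependency, assumed installed

HOST, PORT = "127.0.0.1", 9743  # taken from the failed Post in the log above

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False      # must be cleared before setting CERT_NONE
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # raw DER, even when unverified

cert = x509.load_der_x509_certificate(der)
now = datetime.now(timezone.utc)
print("subject:  ", cert.subject.rfc4514_string())
print("not after:", cert.not_valid_after_utc)   # expect 2025-08-24T17:21:41Z
print("expired:  ", now > cert.not_valid_after_utc)
```

On CRC this pattern usually means the VM was started long after the bundled certificates' validity window; letting the cluster's certificate rotation complete, or recreating the CRC instance, is the usual remedy.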
event="NodeHasSufficientMemory" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.240136 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.240145 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.240164 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.240176 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:38Z","lastTransitionTime":"2026-01-30T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.347503 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.347553 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.347562 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.347577 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.347602 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:38Z","lastTransitionTime":"2026-01-30T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.449804 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.449882 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.449896 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.449914 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.449926 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:38Z","lastTransitionTime":"2026-01-30T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.549231 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ck4hq_562b7bc8-0631-497c-9b8a-05af82dcfff9/kube-multus/0.log" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.549298 4875 generic.go:334] "Generic (PLEG): container finished" podID="562b7bc8-0631-497c-9b8a-05af82dcfff9" containerID="3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a" exitCode=1 Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.549336 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ck4hq" event={"ID":"562b7bc8-0631-497c-9b8a-05af82dcfff9","Type":"ContainerDied","Data":"3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a"} Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.549912 4875 scope.go:117] "RemoveContainer" containerID="3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.552039 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.552092 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.552106 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.552121 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.552133 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:38Z","lastTransitionTime":"2026-01-30T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
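The entries that follow show the same expired-certificate failure hitting pod status updates through the pod.network-node-identity.openshift.io webhook. The patch bodies are hard to read in this form because the JSON is quoted twice on its way into the journal (once inside the err= field, once by the structured logger), so every quote appears as \\\" in the raw text. A small sketch, not part of the log, that peels one string-quoting level at a time until plain JSON remains; raw is a short hypothetical stand-in for a payload copied out of an entry:

```python
# Sketch: recover the JSON patch embedded in a "failed to patch status" entry.
# Each quoting level turns " into \" (and \ into \\), so repeatedly treating the
# text as the body of a JSON string peels one level per pass. "raw" below is a
# hypothetical three-levels-deep excerpt, not a real payload from this log.
import json

raw = r'{\\\"status\\\":{\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\"}}}'

text = raw
while True:
    try:
        patch = json.loads(text)        # done once the text parses as plain JSON
        break
    except json.JSONDecodeError:
        text = json.loads(f'"{text}"')  # peel exactly one string-quoting level

print(json.dumps(patch, indent=2))
```

Applied to the payloads below, this recovers the ordinary status patch (conditions, containerStatuses, volumeMounts) that the kubelet was trying to apply when the webhook call failed.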
Has your network provider started?"} Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.565800 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.580439 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.594440 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"2026-01-30T16:56:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f822f6ce-8193-4deb-a1f4-ed8465244ab4\\\\n2026-01-30T16:56:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f822f6ce-8193-4deb-a1f4-ed8465244ab4 to /host/opt/cni/bin/\\\\n2026-01-30T16:56:53Z [verbose] multus-daemon started\\\\n2026-01-30T16:56:53Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:57:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.604384 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fdb34b6f0a28383b063244f9229d8a4d46f8e33104f7a3cad58b8b3188ff582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9ae124864c3ff9984c3b20615ed908dc0f7b190f322642d97dbd0338aea92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 
16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.617637 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.631510 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.641239 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.655789 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.655839 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.655853 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.655874 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.655886 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:38Z","lastTransitionTime":"2026-01-30T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.656299 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 
2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.676563 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4
8be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:16Z\\\",\\\"message\\\":\\\":29103\\\\\\\"\\\\nI0130 16:57:16.186196 6505 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:57:16.186232 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-ck4hq\\\\nF0130 16:57:16.186242 6505 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set nod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.688384 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.703602 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.717549 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.733413 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.748774 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"945ae17d-fe16-4501-bb14-56544b2c13c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3602451d315d0555abce0fd45866f7191ef2b169be6a2b71df9b206844d1eaa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c9696f430b3b9f427ae6573b228d01d9296814e8983dd48ade9374ab323d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e792bd5d0c930c7e45a3b73fdd1c146e50f7d686f9b7ded43e66de3547804b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.758149 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.758175 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.758184 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.758197 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.758205 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:38Z","lastTransitionTime":"2026-01-30T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.771495 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.787472 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.800145 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64282947-3e36-453a-b460-ada872b157c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ptnnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.850862 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.860627 4875 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.860681 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.860695 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.860717 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.860738 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:38Z","lastTransitionTime":"2026-01-30T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.963342 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.963370 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.963380 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.963394 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:38 crc kubenswrapper[4875]: I0130 16:57:38.963404 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:38Z","lastTransitionTime":"2026-01-30T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.065860 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.065922 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.065931 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.065944 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.065953 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:39Z","lastTransitionTime":"2026-01-30T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.132831 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 08:04:13.773894996 +0000 UTC Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.168407 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.168461 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.168471 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.168493 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.168504 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:39Z","lastTransitionTime":"2026-01-30T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.271076 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.271117 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.271127 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.271142 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.271151 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:39Z","lastTransitionTime":"2026-01-30T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.373992 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.374043 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.374054 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.374071 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.374083 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:39Z","lastTransitionTime":"2026-01-30T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.475697 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.475761 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.475774 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.475797 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.475809 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:39Z","lastTransitionTime":"2026-01-30T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.554135 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ck4hq_562b7bc8-0631-497c-9b8a-05af82dcfff9/kube-multus/0.log" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.554212 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ck4hq" event={"ID":"562b7bc8-0631-497c-9b8a-05af82dcfff9","Type":"ContainerStarted","Data":"3b26a1f922e0214d976c84feb63e7ad8957d0d356ff5287eb78b1a6eaf4564ac"} Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.570511 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.578163 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.578207 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.578224 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.578249 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.578264 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:39Z","lastTransitionTime":"2026-01-30T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.584201 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.599832 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.614922 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.627429 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.639918 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"945ae17d-fe16-4501-bb14-56544b2c13c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3602451d315d0555abce0fd45866f7191ef2b169be6a2b71df9b206844d1eaa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c9696f430b3b9f427ae6573b228d01d9296814e8983dd48ade9374ab323d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e792bd5d0c930c7e45a3b73fdd1c146e50f7d686f9b7ded43e66de3547804b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.664032 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df4484902
17d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.674125 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64282947-3e36-453a-b460-ada872b157c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ptnnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.680780 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.680821 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.680832 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.680849 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.680860 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:39Z","lastTransitionTime":"2026-01-30T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.684054 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.696460 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.706990 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.718916 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b26a1f922e0214d976c84feb63e7ad8957d0d356ff5287eb78b1a6eaf4564ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"2026-01-30T16:56:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f822f6ce-8193-4deb-a1f4-ed8465244ab4\\\\n2026-01-30T16:56:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f822f6ce-8193-4deb-a1f4-ed8465244ab4 to /host/opt/cni/bin/\\\\n2026-01-30T16:56:53Z [verbose] multus-daemon started\\\\n2026-01-30T16:56:53Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:57:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.731309 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fdb34b6f0a28383b063244f9229d8a4d46f8e33104f7a3cad58b8b3188ff582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9ae124864c3ff9984c3b20615ed908dc0f7b190f322642d97dbd0338aea92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 
16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.750950 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9
aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:16Z\\\",\\\"message\\\":\\\":29103\\\\\\\"\\\\nI0130 16:57:16.186196 6505 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:57:16.186232 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-ck4hq\\\\nF0130 16:57:16.186242 6505 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set nod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.764234 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 
configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.775925 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.783216 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.783276 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.783285 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.783303 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.783314 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:39Z","lastTransitionTime":"2026-01-30T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.785809 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.800745 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.885900 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.886207 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:39 crc 
kubenswrapper[4875]: I0130 16:57:39.886285 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.886359 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.886426 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:39Z","lastTransitionTime":"2026-01-30T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.988977 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.989042 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.989057 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.989081 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:39 crc kubenswrapper[4875]: I0130 16:57:39.989102 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:39Z","lastTransitionTime":"2026-01-30T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.091981 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.092051 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.092071 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.092100 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.092119 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:40Z","lastTransitionTime":"2026-01-30T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.134247 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:35:30.001397372 +0000 UTC Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.135576 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.135689 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.135787 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.135906 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:40 crc kubenswrapper[4875]: E0130 16:57:40.136026 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:40 crc kubenswrapper[4875]: E0130 16:57:40.135893 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:40 crc kubenswrapper[4875]: E0130 16:57:40.136117 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:40 crc kubenswrapper[4875]: E0130 16:57:40.136176 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.148394 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.164010 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b26a1f922e0214d976c84feb63e7ad8957d0d356ff5287eb78b1a6eaf4564ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"2026-01-30T16:56:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f822f6ce-8193-4deb-a1f4-ed8465244ab4\\\\n2026-01-30T16:56:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f822f6ce-8193-4deb-a1f4-ed8465244ab4 to /host/opt/cni/bin/\\\\n2026-01-30T16:56:53Z [verbose] multus-daemon started\\\\n2026-01-30T16:56:53Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:57:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.178406 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fdb34b6f0a28383b063244f9229d8a4d46f8e33104f7a3cad58b8b3188ff582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9ae124864c3ff9984c3b20615ed908dc0f7b190f322642d97dbd0338aea92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 
16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.193176 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.195892 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.195936 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.195946 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.195965 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.195976 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:40Z","lastTransitionTime":"2026-01-30T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.208074 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.223383 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.244273 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.266653 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:16Z\\\",\\\"message\\\":\\\":29103\\\\\\\"\\\\nI0130 16:57:16.186196 6505 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:57:16.186232 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-ck4hq\\\\nF0130 16:57:16.186242 6505 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set nod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.281469 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 
configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.294394 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"945ae17d-fe16-4501-bb14-56544b2c13c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3602451d315d0555abce0fd45866f7191ef2b169be6a2b71df9b206844d1eaa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c9696f430b3b9f427ae6573b228d01d9296814e8983dd48ade9374ab323d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e792bd5d0c930c7e45a3b73fdd1c146e50f7d686f9b7ded43e66de3547804b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\
\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.297900 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.297942 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.297955 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.297973 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.297986 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:40Z","lastTransitionTime":"2026-01-30T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.315743 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.328689 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.341423 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.352080 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.364889 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.378711 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.392884 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64282947-3e36-453a-b460-ada872b157c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ptnnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.400232 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.400263 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.400273 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.400289 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.400302 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:40Z","lastTransitionTime":"2026-01-30T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.408378 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.502902 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.502956 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.502974 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.503095 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.503116 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:40Z","lastTransitionTime":"2026-01-30T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.605635 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.605679 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.605689 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.605705 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.605724 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:40Z","lastTransitionTime":"2026-01-30T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.707439 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.707503 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.707525 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.707553 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.707574 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:40Z","lastTransitionTime":"2026-01-30T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.809819 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.809853 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.809864 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.809881 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.809891 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:40Z","lastTransitionTime":"2026-01-30T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.912142 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.912170 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.912178 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.912194 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:40 crc kubenswrapper[4875]: I0130 16:57:40.912203 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:40Z","lastTransitionTime":"2026-01-30T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.014072 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.014105 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.014117 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.014137 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.014150 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:41Z","lastTransitionTime":"2026-01-30T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.116501 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.116549 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.116559 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.116576 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.116601 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:41Z","lastTransitionTime":"2026-01-30T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.134905 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:18:33.413714377 +0000 UTC Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.219030 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.219067 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.219076 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.219091 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.219100 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:41Z","lastTransitionTime":"2026-01-30T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.321231 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.321268 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.321276 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.321292 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.321301 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:41Z","lastTransitionTime":"2026-01-30T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.423087 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.423134 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.423142 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.423157 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.423167 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:41Z","lastTransitionTime":"2026-01-30T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.525163 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.525210 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.525222 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.525239 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.525250 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:41Z","lastTransitionTime":"2026-01-30T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.627415 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.627451 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.627461 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.627474 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.627485 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:41Z","lastTransitionTime":"2026-01-30T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.730096 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.730136 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.730148 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.730164 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.730173 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:41Z","lastTransitionTime":"2026-01-30T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.832655 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.832701 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.832709 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.832724 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.832736 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:41Z","lastTransitionTime":"2026-01-30T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.934665 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.934708 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.934720 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.934736 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:41 crc kubenswrapper[4875]: I0130 16:57:41.934747 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:41Z","lastTransitionTime":"2026-01-30T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.036538 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.036626 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.036639 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.036660 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.036672 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:42Z","lastTransitionTime":"2026-01-30T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.135185 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 01:55:07.859302954 +0000 UTC Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.135348 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.135387 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.135353 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:42 crc kubenswrapper[4875]: E0130 16:57:42.135464 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.135631 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:42 crc kubenswrapper[4875]: E0130 16:57:42.135628 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:42 crc kubenswrapper[4875]: E0130 16:57:42.135783 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:42 crc kubenswrapper[4875]: E0130 16:57:42.135874 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.138568 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.138615 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.138630 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.138643 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.138652 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:42Z","lastTransitionTime":"2026-01-30T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.240490 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.240538 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.240549 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.240567 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.240595 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:42Z","lastTransitionTime":"2026-01-30T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.343002 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.343048 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.343080 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.343097 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.343107 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:42Z","lastTransitionTime":"2026-01-30T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.446243 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.446282 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.446290 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.446307 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.446317 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:42Z","lastTransitionTime":"2026-01-30T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.549004 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.549053 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.549064 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.549084 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.549094 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:42Z","lastTransitionTime":"2026-01-30T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.651663 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.651696 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.651707 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.651723 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.651733 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:42Z","lastTransitionTime":"2026-01-30T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.754032 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.754081 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.754093 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.754109 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.754120 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:42Z","lastTransitionTime":"2026-01-30T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.859362 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.859409 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.859420 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.859439 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.859456 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:42Z","lastTransitionTime":"2026-01-30T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.962151 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.962199 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.962212 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.962230 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:42 crc kubenswrapper[4875]: I0130 16:57:42.962243 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:42Z","lastTransitionTime":"2026-01-30T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.065349 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.065392 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.065403 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.065419 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.065430 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:43Z","lastTransitionTime":"2026-01-30T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.135301 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 03:38:01.437044977 +0000 UTC Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.168548 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.168637 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.168661 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.168690 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.168711 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:43Z","lastTransitionTime":"2026-01-30T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.271172 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.271206 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.271215 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.271230 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.271239 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:43Z","lastTransitionTime":"2026-01-30T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.373564 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.373653 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.373667 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.373689 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.373708 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:43Z","lastTransitionTime":"2026-01-30T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.475757 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.475798 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.475808 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.475822 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.475832 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:43Z","lastTransitionTime":"2026-01-30T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.578627 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.578692 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.578704 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.578722 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.578736 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:43Z","lastTransitionTime":"2026-01-30T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.681220 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.681264 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.681275 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.681291 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.681304 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:43Z","lastTransitionTime":"2026-01-30T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.784647 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.784703 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.784719 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.784739 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.784750 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:43Z","lastTransitionTime":"2026-01-30T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.888145 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.888196 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.888211 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.888232 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.888243 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:43Z","lastTransitionTime":"2026-01-30T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.994287 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.994382 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.994408 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.994449 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:43 crc kubenswrapper[4875]: I0130 16:57:43.994474 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:43Z","lastTransitionTime":"2026-01-30T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.097817 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.097890 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.097909 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.097937 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.097958 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:44Z","lastTransitionTime":"2026-01-30T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.135643 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.135528 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:47:43.150121037 +0000 UTC Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.135786 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.135790 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:44 crc kubenswrapper[4875]: E0130 16:57:44.135922 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.135967 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:44 crc kubenswrapper[4875]: E0130 16:57:44.136117 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:44 crc kubenswrapper[4875]: E0130 16:57:44.136306 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:44 crc kubenswrapper[4875]: E0130 16:57:44.136462 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.149903 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.200829 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.200886 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.200907 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.200934 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.200956 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:44Z","lastTransitionTime":"2026-01-30T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.305080 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.305155 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.305173 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.305202 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.305224 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:44Z","lastTransitionTime":"2026-01-30T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.408361 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.408432 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.408451 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.408485 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.408506 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:44Z","lastTransitionTime":"2026-01-30T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.511985 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.512051 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.512065 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.512087 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.512102 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:44Z","lastTransitionTime":"2026-01-30T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.614107 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.614230 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.614252 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.614274 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.614287 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:44Z","lastTransitionTime":"2026-01-30T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.716709 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.716743 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.716753 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.716768 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.716777 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:44Z","lastTransitionTime":"2026-01-30T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.819075 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.819107 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.819114 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.819147 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.819157 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:44Z","lastTransitionTime":"2026-01-30T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.921780 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.921830 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.921842 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.921862 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:44 crc kubenswrapper[4875]: I0130 16:57:44.921876 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:44Z","lastTransitionTime":"2026-01-30T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.025500 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.025571 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.025610 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.025641 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.025654 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:45Z","lastTransitionTime":"2026-01-30T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.128632 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.128674 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.128683 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.128705 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.128718 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:45Z","lastTransitionTime":"2026-01-30T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.136000 4875 scope.go:117] "RemoveContainer" containerID="d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.136327 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 10:59:01.695908547 +0000 UTC Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.230478 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.230525 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.230534 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.230548 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.230557 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:45Z","lastTransitionTime":"2026-01-30T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.333008 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.333047 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.333056 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.333070 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.333080 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:45Z","lastTransitionTime":"2026-01-30T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.341438 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.435608 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.435646 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.435654 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.435684 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.435695 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:45Z","lastTransitionTime":"2026-01-30T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.539134 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.539184 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.539194 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.539212 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.539223 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:45Z","lastTransitionTime":"2026-01-30T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.574904 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovnkube-controller/2.log" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.577148 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerStarted","Data":"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c"} Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.578368 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.592492 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.605876 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fdb34b6f0a28383b063244f9229d8a4d46f8e33104f7a3cad58b8b3188ff582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9ae124864c3ff9984c3b20615ed908dc0f7b190f322642d97dbd0338aea92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.618214 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.629321 4875 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.641764 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.641884 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.641910 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.641940 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.641996 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:45Z","lastTransitionTime":"2026-01-30T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.644241 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b26a1f922e0214d976c84feb63e7ad8957d0d356ff5287eb78b1a6eaf4564ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"2026-01-30T16:56:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f822f6ce-8193-4deb-a1f4-ed8465244ab4\\\\n2026-01-30T16:56:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f822f6ce-8193-4deb-a1f4-ed8465244ab4 to /host/opt/cni/bin/\\\\n2026-01-30T16:56:53Z [verbose] multus-daemon started\\\\n2026-01-30T16:56:53Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:57:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.659096 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.680509 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:16Z\\\",\\\"message\\\":\\\":29103\\\\\\\"\\\\nI0130 16:57:16.186196 6505 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:57:16.186232 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-ck4hq\\\\nF0130 16:57:16.186242 6505 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set 
nod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.694735 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.712302 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.729210 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.745243 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.745296 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.745305 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.745325 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.745360 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:45Z","lastTransitionTime":"2026-01-30T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.755927 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.769836 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.784293 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.796964 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.809018 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.824702 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.838654 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"945ae17d-fe16-4501-bb14-56544b2c13c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3602451d315d0555abce0fd45866f7191ef2b169be6a2b71df9b206844d1eaa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c9696f430b3b9f427ae6573b228d01d9296814e8983dd48ade9374ab323d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e792bd5d0c930c7e45a3b73fdd1c146e50f7d686f9b7ded43e66de3547804b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.848380 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.848421 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.848431 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.848449 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.848460 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:45Z","lastTransitionTime":"2026-01-30T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.852565 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd672ea8-8746-4e5c-a411-562c052c6f7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fc6c88a382e130d540ed1bbf460e3d8de5f41d159555c7e8040b2816b320f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8898eafcfe22a7ee768bab7d5557199f7e90f22053ffaea0d39edf906c69889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8898eafcfe22a7ee768bab7d5557199f7e90f22053ffaea0d39edf906c69889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.864892 4875 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64282947-3e36-453a-b460-ada872b157c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ptnnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.950942 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.950981 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 
16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.950993 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.951013 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:45 crc kubenswrapper[4875]: I0130 16:57:45.951025 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:45Z","lastTransitionTime":"2026-01-30T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.053012 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.053056 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.053071 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.053097 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.053111 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:46Z","lastTransitionTime":"2026-01-30T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.135661 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.135747 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.135685 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.135663 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:46 crc kubenswrapper[4875]: E0130 16:57:46.135833 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:46 crc kubenswrapper[4875]: E0130 16:57:46.135883 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:46 crc kubenswrapper[4875]: E0130 16:57:46.135971 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:46 crc kubenswrapper[4875]: E0130 16:57:46.136020 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.136561 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 15:20:10.217787469 +0000 UTC Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.155895 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.155949 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.155966 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.155990 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.156007 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:46Z","lastTransitionTime":"2026-01-30T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.259253 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.259304 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.259319 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.259335 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.259349 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:46Z","lastTransitionTime":"2026-01-30T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.362407 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.362482 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.362490 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.362507 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.362518 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:46Z","lastTransitionTime":"2026-01-30T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.465192 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.465273 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.465289 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.465312 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.465330 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:46Z","lastTransitionTime":"2026-01-30T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.574416 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.574471 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.574486 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.574544 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.574558 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:46Z","lastTransitionTime":"2026-01-30T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.581300 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovnkube-controller/3.log" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.582135 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovnkube-controller/2.log" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.584686 4875 generic.go:334] "Generic (PLEG): container finished" podID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerID="41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c" exitCode=1 Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.584731 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerDied","Data":"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c"} Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.584784 4875 scope.go:117] "RemoveContainer" containerID="d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.585651 4875 scope.go:117] "RemoveContainer" containerID="41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c" Jan 30 16:57:46 crc kubenswrapper[4875]: E0130 16:57:46.585908 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.601903 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.615546 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.638047 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b26a1f922e0214d976c84feb63e7ad8957d0d356ff5287eb78b1a6eaf4564ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"2026-01-30T16:56:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f822f6ce-8193-4deb-a1f4-ed8465244ab4\\\\n2026-01-30T16:56:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f822f6ce-8193-4deb-a1f4-ed8465244ab4 to /host/opt/cni/bin/\\\\n2026-01-30T16:56:53Z [verbose] multus-daemon started\\\\n2026-01-30T16:56:53Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:57:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.650153 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fdb34b6f0a28383b063244f9229d8a4d46f8e33104f7a3cad58b8b3188ff582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9ae124864c3ff9984c3b20615ed908dc0f7b190f322642d97dbd0338aea92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 
16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.667493 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.676500 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.676538 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.676546 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.676564 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.676577 4875 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:46Z","lastTransitionTime":"2026-01-30T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.681644 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.692651 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.707352 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.726790 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d04160d477e03859c1d2c61303eda05de53723bc8bdd378e47cc61abba2b6272\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:16Z\\\",\\\"message\\\":\\\":29103\\\\\\\"\\\\nI0130 16:57:16.186196 6505 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:57:16.186232 6505 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-ck4hq\\\\nF0130 16:57:16.186242 6505 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set nod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:46Z\\\",\\\"message\\\":\\\"e hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.139:17698:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:57:45.997133 6889 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:57:45.997166 6889 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c9
7c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.739560 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.754485 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.766920 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"945ae17d-fe16-4501-bb14-56544b2c13c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3602451d315d0555abce0fd45866f7191ef2b169be6a2b71df9b206844d1eaa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c9696f430b3b9f427ae6573b228d01d9296814e8983dd48ade9374ab323d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e792bd5d0c930c7e45a3b73fdd1c146e50f7d686f9b7ded43e66de3547804b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.779317 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.779389 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.779401 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.779421 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 
16:57:46.779458 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:46Z","lastTransitionTime":"2026-01-30T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.781082 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd672ea8-8746-4e5c-a411-562c052c6f7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fc6c88a382e130d540ed1bbf460e3d8de5f41d159555c7e8040b2816b320f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8898eafcfe22a7ee768bab7d5557199f7e90f22053ffaea0d39edf906c69889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8898eafcfe22a7ee768bab7d5557199f7e90f22053ffaea0d39edf906c69889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.799782 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.814813 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.828833 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.845407 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.857795 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64282947-3e36-453a-b460-ada872b157c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ptnnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.872364 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.882528 4875 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.882566 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.882575 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.882606 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.882618 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:46Z","lastTransitionTime":"2026-01-30T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.985570 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.985654 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.985666 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.985687 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:46 crc kubenswrapper[4875]: I0130 16:57:46.985698 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:46Z","lastTransitionTime":"2026-01-30T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.087867 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.087913 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.087923 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.087941 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.087951 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:47Z","lastTransitionTime":"2026-01-30T16:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.137746 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:40:04.906432811 +0000 UTC Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.190356 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.190395 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.190404 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.190420 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.190435 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:47Z","lastTransitionTime":"2026-01-30T16:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.292984 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.293085 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.293104 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.293138 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.293156 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:47Z","lastTransitionTime":"2026-01-30T16:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.395106 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.395150 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.395163 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.395185 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.395196 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:47Z","lastTransitionTime":"2026-01-30T16:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.497837 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.497920 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.497948 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.497984 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.498007 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:47Z","lastTransitionTime":"2026-01-30T16:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.591481 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovnkube-controller/3.log" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.597339 4875 scope.go:117] "RemoveContainer" containerID="41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c" Jan 30 16:57:47 crc kubenswrapper[4875]: E0130 16:57:47.597813 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.600045 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.600120 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.600136 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.600153 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.600166 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:47Z","lastTransitionTime":"2026-01-30T16:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.616914 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd672ea8-8746-4e5c-a411-562c052c6f7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fc6c88a382e130d540ed1bbf460e3d8de5f41d159555c7e8040b2816b320f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8898eafcfe22a7ee768bab7d5557199f7e90f22053ffaea0d39edf906c69889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8898eafcfe22a7ee768bab7d5557199f7e90f22053ffaea0d39edf906c69889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.649559 4875 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:
56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.664789 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.680384 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.694744 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.701862 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.701896 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.701905 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.701920 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.701931 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:47Z","lastTransitionTime":"2026-01-30T16:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.711136 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.723652 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.736746 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"945ae17d-fe16-4501-bb14-56544b2c13c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3602451d315d0555abce0fd45866f7191ef2b169be6a2b71df9b206844d1eaa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c9696f430b3b9f427ae6573b228d01d9296814e8983dd48ade9374ab323d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e792bd5d0c930c7e45a3b73fdd1c146e50f7d686f9b7ded43e66de3547804b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.748725 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64282947-3e36-453a-b460-ada872b157c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ptnnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.763713 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.776231 4875 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b26a1f922e0214d976c84feb63e7ad8957d0d356ff5287eb78b1a6eaf4564ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"2026-01-30T16:56:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f822f6ce-8193-4deb-a1f4-ed8465244ab4\\\\n2026-01-30T16:56:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f822f6ce-8193-4deb-a1f4-ed8465244ab4 to /host/opt/cni/bin/\\\\n2026-01-30T16:56:53Z [verbose] multus-daemon started\\\\n2026-01-30T16:56:53Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:57:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.786803 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fdb34b6f0a28383b063244f9229d8a4d46f8e33104f7a3cad58b8b3188ff582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9ae124864c3ff9984c3b20615ed908dc0f7b190f322642d97dbd0338aea92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 
16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.801079 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.804982 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.805121 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.805481 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.805720 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.805880 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:47Z","lastTransitionTime":"2026-01-30T16:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.810226 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.821631 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.843176 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.860186 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:46Z\\\",\\\"message\\\":\\\"e hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.139:17698:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:57:45.997133 6889 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:57:45.997166 6889 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.874487 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 
configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.885671 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.909239 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.909312 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.909326 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.909345 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:47 crc kubenswrapper[4875]: I0130 16:57:47.909357 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:47Z","lastTransitionTime":"2026-01-30T16:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.012849 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.012907 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.012920 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.012942 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.012960 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:48Z","lastTransitionTime":"2026-01-30T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.114834 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.115053 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.115122 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.115209 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.115303 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:48Z","lastTransitionTime":"2026-01-30T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.135718 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.135718 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.135755 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:48 crc kubenswrapper[4875]: E0130 16:57:48.135998 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:48 crc kubenswrapper[4875]: E0130 16:57:48.135951 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:48 crc kubenswrapper[4875]: E0130 16:57:48.136034 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.135767 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:48 crc kubenswrapper[4875]: E0130 16:57:48.136085 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.137883 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 08:50:22.332282702 +0000 UTC Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.217927 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.218246 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.218333 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.218411 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.218477 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:48Z","lastTransitionTime":"2026-01-30T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.321124 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.321478 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.321632 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.321725 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.321861 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:48Z","lastTransitionTime":"2026-01-30T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.425226 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.425673 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.425861 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.426008 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.426146 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:48Z","lastTransitionTime":"2026-01-30T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.430529 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.430754 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.431004 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.431272 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.431540 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:48Z","lastTransitionTime":"2026-01-30T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:48 crc kubenswrapper[4875]: E0130 16:57:48.450824 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:48Z is after 
2025-08-24T17:21:41Z" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.455798 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.455830 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.455841 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.455855 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.455866 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:48Z","lastTransitionTime":"2026-01-30T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:48 crc kubenswrapper[4875]: E0130 16:57:48.467531 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:48Z is after 
2025-08-24T17:21:41Z" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.471909 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.471950 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.471964 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.471979 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.471990 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:48Z","lastTransitionTime":"2026-01-30T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:48 crc kubenswrapper[4875]: E0130 16:57:48.491450 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:48Z is after 
2025-08-24T17:21:41Z" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.496020 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.496093 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.496112 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.496159 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.496179 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:48Z","lastTransitionTime":"2026-01-30T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:48 crc kubenswrapper[4875]: E0130 16:57:48.512039 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:48Z is after 
2025-08-24T17:21:41Z" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.517509 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.517565 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.517576 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.517625 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.517642 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:48Z","lastTransitionTime":"2026-01-30T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:48 crc kubenswrapper[4875]: E0130 16:57:48.536475 4875 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58694c46-6e56-4811-9d59-25ba706e9ec3\\\",\\\"systemUUID\\\":\\\"1622a68f-c9e8-4b6d-b2e7-c5e881732b1e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:48Z is after 
2025-08-24T17:21:41Z" Jan 30 16:57:48 crc kubenswrapper[4875]: E0130 16:57:48.536624 4875 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.538906 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.538963 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.538977 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.538999 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.539015 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:48Z","lastTransitionTime":"2026-01-30T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.642058 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.642113 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.642125 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.642149 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.642161 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:48Z","lastTransitionTime":"2026-01-30T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.745494 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.745557 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.745570 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.745607 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.745620 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:48Z","lastTransitionTime":"2026-01-30T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.847948 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.848024 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.848038 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.848057 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.848071 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:48Z","lastTransitionTime":"2026-01-30T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.950909 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.950949 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.950959 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.950975 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:48 crc kubenswrapper[4875]: I0130 16:57:48.950984 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:48Z","lastTransitionTime":"2026-01-30T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.053849 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.053926 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.053950 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.053974 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.053995 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:49Z","lastTransitionTime":"2026-01-30T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.138564 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:10:55.830375633 +0000 UTC Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.157029 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.157093 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.157113 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.157133 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.157146 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:49Z","lastTransitionTime":"2026-01-30T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.259555 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.259666 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.259689 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.259719 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.259743 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:49Z","lastTransitionTime":"2026-01-30T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.362130 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.362181 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.362198 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.362223 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.362239 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:49Z","lastTransitionTime":"2026-01-30T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.465265 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.465316 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.465337 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.465370 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.465387 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:49Z","lastTransitionTime":"2026-01-30T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.568798 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.568835 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.568845 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.568859 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.568871 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:49Z","lastTransitionTime":"2026-01-30T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.672143 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.672198 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.672214 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.672239 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.672258 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:49Z","lastTransitionTime":"2026-01-30T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.775041 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.775074 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.775083 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.775098 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.775108 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:49Z","lastTransitionTime":"2026-01-30T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.878402 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.878459 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.878474 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.878499 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.878516 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:49Z","lastTransitionTime":"2026-01-30T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.981091 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.981143 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.981155 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.981176 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:49 crc kubenswrapper[4875]: I0130 16:57:49.981189 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:49Z","lastTransitionTime":"2026-01-30T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.084312 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.084364 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.084381 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.084403 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.084416 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:50Z","lastTransitionTime":"2026-01-30T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.135292 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.135438 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:50 crc kubenswrapper[4875]: E0130 16:57:50.135564 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.135729 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.135776 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:50 crc kubenswrapper[4875]: E0130 16:57:50.135846 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:50 crc kubenswrapper[4875]: E0130 16:57:50.135992 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:50 crc kubenswrapper[4875]: E0130 16:57:50.136105 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.138757 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 11:01:46.362312145 +0000 UTC Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.154450 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7c2684f036ddf6233609a58a1347b58d7eea159b983958bd37955c4114a7d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.169974 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rzl5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92bbdc00-4565-4f08-90ef-b14644f90a87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c754261319fad10a4eccbefbc8891c88603ee473937a45efba3386b555f6ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8slsr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rzl5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.187666 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.187703 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.187711 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.187742 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.187752 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:50Z","lastTransitionTime":"2026-01-30T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.187734 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ck4hq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"562b7bc8-0631-497c-9b8a-05af82dcfff9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b26a1f922e0214d976c84feb63e7ad8957d0d356ff5287eb78b1a6eaf4564ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:38Z\\\",\\\"message\\\":\\\"2026-01-30T16:56:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f822f6ce-8193-4deb-a1f4-ed8465244ab4\\\\n2026-01-30T16:56:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f822f6ce-8193-4deb-a1f4-ed8465244ab4 to /host/opt/cni/bin/\\\\n2026-01-30T16:56:53Z [verbose] multus-daemon started\\\\n2026-01-30T16:56:53Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:57:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mnrgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ck4hq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.204708 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92a13cd1-8c0d-4eab-b29c-5fe6d1598629\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fdb34b6f0a28383b063244f9229d8a4d46f8e33104f7a3cad58b8b3188ff582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9ae124864c3ff9984c3b20615ed908dc0f7b190f322642d97dbd0338aea92d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:57:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd5fp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5rzl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 
16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.222492 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"958d4578-6434-4ac1-8cb6-b20988d13e90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"-03-01 16:56:34 +0000 UTC (now=2026-01-30 16:56:50.048297894 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048502 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:56:50.048543 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:56:50.048576 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792204\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792204\\\\\\\\\\\\\\\" (2026-01-30 15:56:44 +0000 UTC to 2027-01-30 15:56:44 +0000 UTC (now=2026-01-30 16:56:50.048551562 +0000 UTC))\\\\\\\"\\\\nI0130 16:56:50.048629 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:56:50.048655 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:56:50.048685 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115081983/tls.crt::/tmp/serving-cert-3115081983/tls.key\\\\\\\"\\\\nI0130 16:56:50.048361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:56:50.048849 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048863 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:56:50.048883 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:56:50.048892 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:56:50.048863 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0130 16:56:50.050831 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.236954 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7886ef658610c765675008914773241223e1612ceb7fa9c5275c8c300550b63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.249797 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nnzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6705291-da0f-49bd-acc7-6c2e027a3b54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75ffac6a67aa826a95b2a7d209006d987ff49ecd386dada77c486cb2729837d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7fvbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nnzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.266575 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f2be659-2cd0-4935-bf58-3e7681692d9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c469c74bbb6b40861fff99e6dda5be0f9ea79c552ee9e7c68421d22454d8c015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3731fad738036a5440e97600b11742dee49ce00bb356495b08d7df55b504f78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c533e85de5e6d65cc2760a62f0f426fddf9a405f44db4732d1db36a7dbdbddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80e1f3bd068790f19fecb944224433532671a87e396ed7df383275823daa8be5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b3b19f0b4089d325ce487b572acfa72996df4e0c61e14be2e23ee3c1f5dc905\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e79d8d31ed795c916d5baf5fd50f978d712fadd30a4b46c08c91b30e4aac37c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648183f4bb00a4a37dbc48f1b6947762f9e7339f91fe66d2515c5ffc3d020fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nk4gt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hqmqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.289967 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.290033 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:50 crc 
kubenswrapper[4875]: I0130 16:57:50.290049 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.290069 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.290083 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:50Z","lastTransitionTime":"2026-01-30T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.296118 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85cf29f6-017d-475a-b63c-cd1cab3c8132\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b068d7dce24e063f88b24d12027fc181be5855
18eba9453c6c9891aa75150c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:57:46Z\\\",\\\"message\\\":\\\"e hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.139:17698:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8efa4d1a-72f5-4dfa-9bc2-9d93ef11ecf2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0130 16:57:45.997133 6889 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-operator/metrics]} name:Service_openshift-ingress-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.244:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d8772e82-b0a4-4596-87d3-3d517c13344b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:57:45.997166 6889 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:57:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbb6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mps6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.307044 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.318149 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df3492d6-93b5-4282-a2ff-f9073a535190\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://041ce057565cd173e15d19ecda136a19d269d54725d1b2cf8f169e7cbab9697d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdd3928dff4101ccf005831ae6b4301a7749ec006cdd309f9293198a85a73bb0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ddaa7d2a192e5a2555c810638cca997af42114ca17cdfff9032cba241b114e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.327438 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"945ae17d-fe16-4501-bb14-56544b2c13c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3602451d315d0555abce0fd45866f7191ef2b169be6a2b71df9b206844d1eaa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9c9696f430b3b9f427ae6573b228d01d9296814e8983dd48ade9374ab323d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41e792bd5d0c930c7e45a3b73fdd1c146e50f7d686f9b7ded43e66de3547804b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7552112ddcf2a1e09be49ac503c15595c1c285b0734f14f9f5f1b59ac7b48bd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.335697 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd672ea8-8746-4e5c-a411-562c052c6f7f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fc6c88a382e130d540ed1bbf460e3d8de5f41d159555c7e8040b2816b320f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8898eafcfe22a7ee768bab7d5557199f7e90f22053ffaea0d39edf906c69889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8898eafcfe22a7ee768bab7d5557199f7e90f22053ffaea0d39edf906c69889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.358860 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6efb31b8-0a6d-4c75-8a72-8133de6c6585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4647d960cf339572906a67da5fa422158e0b535a062714a74b7eb977c0e1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e216e3ef61ea56a1a905cbfaa41485ccab49d6d201e26e42186491e75f2c23e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3e08bdc31143e8843617d681af12b82f25ea681be4e9c2c001a037587558e1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccbb324fdbc91f01428b7fef44266df448490217d077f24b6cd8386bfe407bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba5c4796bd43b39387ac3e85b0c8fccde3c5d064af6b0b1f5dee93174d8a22a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ba17c1eed8cb19f17dd642615be7e322ad3b52da15b628a26bd1f3304d9c31d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37e65491e7fbcb4194eb4e267c064075b0725531527f53fc253c88b138957d99\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:32Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a52a050429aab759d1cbca37f6d2f1fa380b844a11e0660487dd134c97ed86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.372067 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.384227 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.391878 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.391921 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.391932 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.391950 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.391961 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:50Z","lastTransitionTime":"2026-01-30T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.397607 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf2b577872606cc7792f92f9164c6aec2c2ff2ac1c3c113b0329d0df949b4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d0a0c2d61efd68d3a6b20d7778a325251b8d624cc4bce9cfdc842b8576ba47d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.406917 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64282947-3e36-453a-b460-ada872b157c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:57:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:57:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ptnnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.415797 4875 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db52e26560fd50577cf031d8e81921abdbc497b39bbf3f4734d48c91b96f5a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:56:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnkzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:56:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wgsn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:57:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.493898 4875 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.493941 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.493951 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.493965 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.493975 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:50Z","lastTransitionTime":"2026-01-30T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.596354 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.596404 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.596415 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.596435 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.596446 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:50Z","lastTransitionTime":"2026-01-30T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.698995 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.699037 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.699046 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.699059 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.699068 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:50Z","lastTransitionTime":"2026-01-30T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.802055 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.802104 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.802115 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.802131 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.802141 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:50Z","lastTransitionTime":"2026-01-30T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.904855 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.904892 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.904903 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.904919 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:50 crc kubenswrapper[4875]: I0130 16:57:50.904929 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:50Z","lastTransitionTime":"2026-01-30T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.007403 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.007440 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.007449 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.007463 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.007473 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:51Z","lastTransitionTime":"2026-01-30T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.110435 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.110499 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.110508 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.110522 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.110531 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:51Z","lastTransitionTime":"2026-01-30T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.139319 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 23:33:53.262531045 +0000 UTC Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.214046 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.214082 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.214090 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.214103 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.214112 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:51Z","lastTransitionTime":"2026-01-30T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.316957 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.316990 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.316998 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.317013 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.317022 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:51Z","lastTransitionTime":"2026-01-30T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.419729 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.419847 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.419872 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.419902 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.419924 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:51Z","lastTransitionTime":"2026-01-30T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.523295 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.523872 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.523929 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.523983 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.524011 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:51Z","lastTransitionTime":"2026-01-30T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.627102 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.627167 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.627185 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.627211 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.627235 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:51Z","lastTransitionTime":"2026-01-30T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.730338 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.730429 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.730470 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.730522 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.730575 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:51Z","lastTransitionTime":"2026-01-30T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.833720 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.833808 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.833845 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.833879 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.833903 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:51Z","lastTransitionTime":"2026-01-30T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.936345 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.936414 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.936434 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.936467 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:51 crc kubenswrapper[4875]: I0130 16:57:51.936487 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:51Z","lastTransitionTime":"2026-01-30T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.040279 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.040321 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.040333 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.040351 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.040363 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:52Z","lastTransitionTime":"2026-01-30T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.135751 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.135821 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.135778 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.135778 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:52 crc kubenswrapper[4875]: E0130 16:57:52.135900 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:57:52 crc kubenswrapper[4875]: E0130 16:57:52.136041 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:57:52 crc kubenswrapper[4875]: E0130 16:57:52.136124 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:57:52 crc kubenswrapper[4875]: E0130 16:57:52.136177 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9"
Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.139629 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 23:25:25.439516759 +0000 UTC
Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.141847 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.141920 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.141931 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.141948 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.141961 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:52Z","lastTransitionTime":"2026-01-30T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.244186 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.244222 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.244230 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.244242 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.244251 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:52Z","lastTransitionTime":"2026-01-30T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.346893 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.346925 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.346939 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.346961 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.346973 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:52Z","lastTransitionTime":"2026-01-30T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.450222 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.450271 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.450287 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.450313 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.450329 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:52Z","lastTransitionTime":"2026-01-30T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.553408 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.553467 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.553483 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.553506 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.553523 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:52Z","lastTransitionTime":"2026-01-30T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.656763 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.656814 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.656828 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.656851 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.656865 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:52Z","lastTransitionTime":"2026-01-30T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.759952 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.759997 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.760008 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.760025 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.760035 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:52Z","lastTransitionTime":"2026-01-30T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.862423 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.862480 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.862492 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.862512 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.862946 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:52Z","lastTransitionTime":"2026-01-30T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.965445 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.965496 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.965511 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.965530 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:52 crc kubenswrapper[4875]: I0130 16:57:52.965542 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:52Z","lastTransitionTime":"2026-01-30T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.068526 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.068653 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.068671 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.068690 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.068704 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:53Z","lastTransitionTime":"2026-01-30T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.140522 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:14:51.381915477 +0000 UTC Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.171228 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.171283 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.171307 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.171333 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.171352 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:53Z","lastTransitionTime":"2026-01-30T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.273907 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.273953 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.273967 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.273986 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.274001 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:53Z","lastTransitionTime":"2026-01-30T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.377637 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.377673 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.377681 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.377694 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.377702 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:53Z","lastTransitionTime":"2026-01-30T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.481275 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.481328 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.481339 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.481360 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.481372 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:53Z","lastTransitionTime":"2026-01-30T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.583970 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.583998 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.584006 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.584020 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.584030 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:53Z","lastTransitionTime":"2026-01-30T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.685622 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.685654 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.685665 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.685680 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.685692 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:53Z","lastTransitionTime":"2026-01-30T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.788166 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.788206 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.788216 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.788231 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.788239 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:53Z","lastTransitionTime":"2026-01-30T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.861181 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.861328 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:53 crc kubenswrapper[4875]: E0130 16:57:53.861457 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-30 16:58:57.861373207 +0000 UTC m=+148.408736590 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:57:53 crc kubenswrapper[4875]: E0130 16:57:53.861462 4875 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 16:57:53 crc kubenswrapper[4875]: E0130 16:57:53.861578 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:58:57.861558592 +0000 UTC m=+148.408922025 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.890052 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.890088 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.890096 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.890109 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.890118 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:53Z","lastTransitionTime":"2026-01-30T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.962635 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.962747 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.962790 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:53 crc kubenswrapper[4875]: E0130 16:57:53.962857 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:57:53 crc kubenswrapper[4875]: E0130 16:57:53.962922 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:57:53 crc kubenswrapper[4875]: E0130 16:57:53.962947 4875 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:57:53 crc kubenswrapper[4875]: E0130 16:57:53.962865 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:57:53 crc kubenswrapper[4875]: E0130 16:57:53.962998 4875 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:57:53 crc kubenswrapper[4875]: E0130 16:57:53.962993 4875 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:57:53 crc kubenswrapper[4875]: E0130 16:57:53.963032 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:58:57.963004452 +0000 UTC m=+148.510367875 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 16:57:53 crc kubenswrapper[4875]: E0130 16:57:53.963007 4875 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 16:57:53 crc kubenswrapper[4875]: E0130 16:57:53.963112 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:58:57.963085634 +0000 UTC m=+148.510449087 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 16:57:53 crc kubenswrapper[4875]: E0130 16:57:53.963137 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:58:57.963124285 +0000 UTC m=+148.510487668 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.992260 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.992290 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.992299 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.992314 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:57:53 crc kubenswrapper[4875]: I0130 16:57:53.992327 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:53Z","lastTransitionTime":"2026-01-30T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.095336 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.095400 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.095423 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.095454 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.095479 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:54Z","lastTransitionTime":"2026-01-30T16:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.136034 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:57:54 crc kubenswrapper[4875]: E0130 16:57:54.136239 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.136572 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:57:54 crc kubenswrapper[4875]: E0130 16:57:54.136753 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.136825 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.136952 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:57:54 crc kubenswrapper[4875]: E0130 16:57:54.137109 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:57:54 crc kubenswrapper[4875]: E0130 16:57:54.137197 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.140637 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 13:41:59.044622794 +0000 UTC Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.198561 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.198630 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.198642 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.198660 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.198672 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:54Z","lastTransitionTime":"2026-01-30T16:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.301895 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.301956 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.301974 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.302000 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:54 crc kubenswrapper[4875]: I0130 16:57:54.302018 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:54Z","lastTransitionTime":"2026-01-30T16:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
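The repeated condition explains why the node stays NotReady: the container runtime reports NetworkReady=false until a CNI configuration file exists in /etc/kubernetes/cni/net.d/, which on this cluster is written by OVN-Kubernetes once ovnkube-controller comes up. A quick way to watch for that moment by hand (path taken from the log; the polling loop itself is illustrative, not kubelet code):

    # Poll the CNI conf dir the log keeps complaining about. kubelet/CRI-O accept
    # .conf, .conflist and .json CNI configs in this directory.
    import glob, time

    CNI_DIR = "/etc/kubernetes/cni/net.d"   # path from the log messages

    def cni_config_present() -> bool:
        return any(glob.glob(f"{CNI_DIR}/*{ext}") for ext in (".conf", ".conflist", ".json"))

    for _ in range(60):                      # poll for up to a minute
        if cni_config_present():
            print("CNI config present; NetworkReady should flip to true")
            break
        time.sleep(1)
    else:
        print("still no CNI configuration file; network plugin not ready")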
Jan 30 16:57:55 crc kubenswrapper[4875]: I0130 16:57:55.141594 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 23:30:29.873385637 +0000 UTC
[The node event/condition block repeats unchanged at roughly 100 ms intervals from 16:57:55.231 through 16:57:56.058; identical entries omitted.]
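Note that each certificate_manager line reports a different "rotation deadline" against the same expiration. That is expected: the manager recomputes a jittered deadline on every scheduling pass (client-go's certificate manager picks a random point late in the certificate's lifetime; the 70-90% bounds below are an approximation, and the issue time is hypothetical since the log shows only the expiration):

    # Sketch of a jittered rotation deadline. Bounds and the notBefore value are
    # assumptions for illustration; only the notAfter value comes from the log.
    import random
    from datetime import datetime, timedelta

    def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
        total = not_after - not_before
        jitter = random.uniform(0.7, 0.9)    # resampled on every pass, hence new deadlines
        return not_before + timedelta(seconds=total.total_seconds() * jitter)

    nb = datetime(2025, 11, 26, 5, 53, 3)    # hypothetical issue time
    na = datetime(2026, 2, 24, 5, 53, 3)     # expiration from the log
    print(rotation_deadline(nb, na))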
Jan 30 16:57:56 crc kubenswrapper[4875]: I0130 16:57:56.135903 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:57:56 crc kubenswrapper[4875]: I0130 16:57:56.135903 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:57:56 crc kubenswrapper[4875]: I0130 16:57:56.136008 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq"
Jan 30 16:57:56 crc kubenswrapper[4875]: I0130 16:57:56.136112 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:57:56 crc kubenswrapper[4875]: E0130 16:57:56.136101 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:57:56 crc kubenswrapper[4875]: E0130 16:57:56.136256 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9"
Jan 30 16:57:56 crc kubenswrapper[4875]: E0130 16:57:56.136420 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:57:56 crc kubenswrapper[4875]: E0130 16:57:56.136490 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:57:56 crc kubenswrapper[4875]: I0130 16:57:56.141963 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 12:04:11.905903506 +0000 UTC
[The node event/condition block repeats unchanged at roughly 100 ms intervals from 16:57:56.161 through 16:57:57.085; identical entries omitted.]
Jan 30 16:57:57 crc kubenswrapper[4875]: I0130 16:57:57.142659 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 20:49:48.114996994 +0000 UTC
[The node event/condition block repeats unchanged at roughly 100 ms intervals from 16:57:57.188 through 16:57:58.116; identical entries omitted.]
Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.135305 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.135365 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.135335 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:57:58 crc kubenswrapper[4875]: E0130 16:57:58.135509 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:57:58 crc kubenswrapper[4875]: E0130 16:57:58.135674 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.135978 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq"
Jan 30 16:57:58 crc kubenswrapper[4875]: E0130 16:57:58.136035 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:57:58 crc kubenswrapper[4875]: E0130 16:57:58.136215 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9"
Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.136270 4875 scope.go:117] "RemoveContainer" containerID="41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c"
Jan 30 16:57:58 crc kubenswrapper[4875]: E0130 16:57:58.136411 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132"
Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.143443 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:27:48.888705178 +0000 UTC
[The node event/condition block repeats unchanged at roughly 100 ms intervals from 16:57:58.219 through 16:57:58.788; identical entries omitted.]
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:58Z","lastTransitionTime":"2026-01-30T16:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.322146 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.322182 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.322192 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.322207 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.322216 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:58Z","lastTransitionTime":"2026-01-30T16:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.424575 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.424629 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.424639 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.424653 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.424663 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:58Z","lastTransitionTime":"2026-01-30T16:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.526985 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.527030 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.527041 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.527059 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.527072 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:58Z","lastTransitionTime":"2026-01-30T16:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.630096 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.630134 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.630178 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.630199 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.630212 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:58Z","lastTransitionTime":"2026-01-30T16:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.733322 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.733397 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.733415 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.733443 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.733462 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:58Z","lastTransitionTime":"2026-01-30T16:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.788623 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.788673 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.788686 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.788705 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.788719 4875 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:57:58Z","lastTransitionTime":"2026-01-30T16:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.857720 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz"] Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.858273 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.861002 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.861526 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.861734 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.861534 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.918322 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-rzl5h" podStartSLOduration=68.918294523 podStartE2EDuration="1m8.918294523s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:57:58.901730344 +0000 UTC m=+89.449093757" watchObservedRunningTime="2026-01-30 16:57:58.918294523 +0000 UTC m=+89.465657916" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.919234 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ck4hq" podStartSLOduration=68.919227127 podStartE2EDuration="1m8.919227127s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:57:58.919182636 +0000 UTC m=+89.466546039" watchObservedRunningTime="2026-01-30 16:57:58.919227127 +0000 UTC m=+89.466590510" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 
16:57:58.924008 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c253aa3-2658-445f-ada0-e3434f083c50-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wwnpz\" (UID: \"8c253aa3-2658-445f-ada0-e3434f083c50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.924082 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8c253aa3-2658-445f-ada0-e3434f083c50-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wwnpz\" (UID: \"8c253aa3-2658-445f-ada0-e3434f083c50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.924113 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c253aa3-2658-445f-ada0-e3434f083c50-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wwnpz\" (UID: \"8c253aa3-2658-445f-ada0-e3434f083c50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.924136 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8c253aa3-2658-445f-ada0-e3434f083c50-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wwnpz\" (UID: \"8c253aa3-2658-445f-ada0-e3434f083c50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.924198 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8c253aa3-2658-445f-ada0-e3434f083c50-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wwnpz\" (UID: \"8c253aa3-2658-445f-ada0-e3434f083c50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.934365 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5rzl2" podStartSLOduration=68.934341589 podStartE2EDuration="1m8.934341589s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:57:58.934163604 +0000 UTC m=+89.481526997" watchObservedRunningTime="2026-01-30 16:57:58.934341589 +0000 UTC m=+89.481704982" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.951502 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=68.951479443 podStartE2EDuration="1m8.951479443s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:57:58.951475533 +0000 UTC m=+89.498838916" watchObservedRunningTime="2026-01-30 16:57:58.951479443 +0000 UTC m=+89.498842826" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.975368 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-9nnzd" 
podStartSLOduration=68.975345711 podStartE2EDuration="1m8.975345711s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:57:58.975029193 +0000 UTC m=+89.522392576" watchObservedRunningTime="2026-01-30 16:57:58.975345711 +0000 UTC m=+89.522709094" Jan 30 16:57:58 crc kubenswrapper[4875]: I0130 16:57:58.991251 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-hqmqg" podStartSLOduration=68.991233234 podStartE2EDuration="1m8.991233234s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:57:58.991167662 +0000 UTC m=+89.538531045" watchObservedRunningTime="2026-01-30 16:57:58.991233234 +0000 UTC m=+89.538596627" Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.024748 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c253aa3-2658-445f-ada0-e3434f083c50-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wwnpz\" (UID: \"8c253aa3-2658-445f-ada0-e3434f083c50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.025694 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8c253aa3-2658-445f-ada0-e3434f083c50-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wwnpz\" (UID: \"8c253aa3-2658-445f-ada0-e3434f083c50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.025720 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c253aa3-2658-445f-ada0-e3434f083c50-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wwnpz\" (UID: \"8c253aa3-2658-445f-ada0-e3434f083c50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.025744 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8c253aa3-2658-445f-ada0-e3434f083c50-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wwnpz\" (UID: \"8c253aa3-2658-445f-ada0-e3434f083c50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.025779 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8c253aa3-2658-445f-ada0-e3434f083c50-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wwnpz\" (UID: \"8c253aa3-2658-445f-ada0-e3434f083c50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.025870 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8c253aa3-2658-445f-ada0-e3434f083c50-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wwnpz\" (UID: \"8c253aa3-2658-445f-ada0-e3434f083c50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" Jan 30 
16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.026034 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8c253aa3-2658-445f-ada0-e3434f083c50-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wwnpz\" (UID: \"8c253aa3-2658-445f-ada0-e3434f083c50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.026717 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8c253aa3-2658-445f-ada0-e3434f083c50-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wwnpz\" (UID: \"8c253aa3-2658-445f-ada0-e3434f083c50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.032321 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=69.032301608 podStartE2EDuration="1m9.032301608s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:57:59.031800095 +0000 UTC m=+89.579163478" watchObservedRunningTime="2026-01-30 16:57:59.032301608 +0000 UTC m=+89.579664991" Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.037544 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c253aa3-2658-445f-ada0-e3434f083c50-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wwnpz\" (UID: \"8c253aa3-2658-445f-ada0-e3434f083c50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.043860 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c253aa3-2658-445f-ada0-e3434f083c50-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wwnpz\" (UID: \"8c253aa3-2658-445f-ada0-e3434f083c50\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.045432 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=34.045421428 podStartE2EDuration="34.045421428s" podCreationTimestamp="2026-01-30 16:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:57:59.045264154 +0000 UTC m=+89.592627537" watchObservedRunningTime="2026-01-30 16:57:59.045421428 +0000 UTC m=+89.592784811" Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.055139 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=15.055112009 podStartE2EDuration="15.055112009s" podCreationTimestamp="2026-01-30 16:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:57:59.05476875 +0000 UTC m=+89.602132143" watchObservedRunningTime="2026-01-30 16:57:59.055112009 +0000 UTC m=+89.602475392" Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.084222 4875 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=66.084193723 podStartE2EDuration="1m6.084193723s" podCreationTimestamp="2026-01-30 16:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:57:59.083201018 +0000 UTC m=+89.630564411" watchObservedRunningTime="2026-01-30 16:57:59.084193723 +0000 UTC m=+89.631557106" Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.143778 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:26:39.925279162 +0000 UTC Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.143908 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.164534 4875 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.184186 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.231208 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podStartSLOduration=69.231185643 podStartE2EDuration="1m9.231185643s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:57:59.217455187 +0000 UTC m=+89.764818570" watchObservedRunningTime="2026-01-30 16:57:59.231185643 +0000 UTC m=+89.778549026" Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.635980 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" event={"ID":"8c253aa3-2658-445f-ada0-e3434f083c50","Type":"ContainerStarted","Data":"880cb260a889602e4b60fe154dfe13a73a4cfd7926aacd0c75f1c8df92bb80ce"} Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.636041 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" event={"ID":"8c253aa3-2658-445f-ada0-e3434f083c50","Type":"ContainerStarted","Data":"66b2782a95dccd8121881d79b0b750414c8c4e5d6c350974ae869de0cd6d0771"} Jan 30 16:57:59 crc kubenswrapper[4875]: I0130 16:57:59.654445 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wwnpz" podStartSLOduration=69.654422934 podStartE2EDuration="1m9.654422934s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:57:59.654342481 +0000 UTC m=+90.201705904" watchObservedRunningTime="2026-01-30 16:57:59.654422934 +0000 UTC m=+90.201786317" Jan 30 16:58:00 crc kubenswrapper[4875]: I0130 16:58:00.135199 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:00 crc kubenswrapper[4875]: I0130 16:58:00.135293 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:00 crc kubenswrapper[4875]: I0130 16:58:00.135366 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:00 crc kubenswrapper[4875]: I0130 16:58:00.136444 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:00 crc kubenswrapper[4875]: E0130 16:58:00.136453 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:00 crc kubenswrapper[4875]: E0130 16:58:00.136700 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:00 crc kubenswrapper[4875]: E0130 16:58:00.136832 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:00 crc kubenswrapper[4875]: E0130 16:58:00.136938 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:02 crc kubenswrapper[4875]: I0130 16:58:02.135950 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:02 crc kubenswrapper[4875]: I0130 16:58:02.136029 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:02 crc kubenswrapper[4875]: E0130 16:58:02.136126 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:02 crc kubenswrapper[4875]: I0130 16:58:02.136183 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:02 crc kubenswrapper[4875]: E0130 16:58:02.136289 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:02 crc kubenswrapper[4875]: E0130 16:58:02.136416 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:02 crc kubenswrapper[4875]: I0130 16:58:02.136764 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:02 crc kubenswrapper[4875]: E0130 16:58:02.136950 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:04 crc kubenswrapper[4875]: I0130 16:58:04.134957 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:04 crc kubenswrapper[4875]: I0130 16:58:04.134988 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:04 crc kubenswrapper[4875]: I0130 16:58:04.135018 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:04 crc kubenswrapper[4875]: I0130 16:58:04.135190 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:04 crc kubenswrapper[4875]: E0130 16:58:04.136430 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:04 crc kubenswrapper[4875]: E0130 16:58:04.136522 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:04 crc kubenswrapper[4875]: E0130 16:58:04.136686 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:04 crc kubenswrapper[4875]: E0130 16:58:04.136384 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:06 crc kubenswrapper[4875]: I0130 16:58:06.135041 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:06 crc kubenswrapper[4875]: E0130 16:58:06.135686 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:06 crc kubenswrapper[4875]: I0130 16:58:06.136094 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:06 crc kubenswrapper[4875]: E0130 16:58:06.136278 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:06 crc kubenswrapper[4875]: I0130 16:58:06.136405 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:06 crc kubenswrapper[4875]: E0130 16:58:06.136550 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:06 crc kubenswrapper[4875]: I0130 16:58:06.136698 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:06 crc kubenswrapper[4875]: E0130 16:58:06.137009 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:08 crc kubenswrapper[4875]: I0130 16:58:08.136054 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:08 crc kubenswrapper[4875]: I0130 16:58:08.136119 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:08 crc kubenswrapper[4875]: I0130 16:58:08.136193 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:08 crc kubenswrapper[4875]: E0130 16:58:08.136248 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:08 crc kubenswrapper[4875]: E0130 16:58:08.136340 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:08 crc kubenswrapper[4875]: I0130 16:58:08.136090 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:08 crc kubenswrapper[4875]: E0130 16:58:08.136471 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:08 crc kubenswrapper[4875]: E0130 16:58:08.136574 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:08 crc kubenswrapper[4875]: I0130 16:58:08.531540 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs\") pod \"network-metrics-daemon-ptnnq\" (UID: \"64282947-3e36-453a-b460-ada872b157c9\") " pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:08 crc kubenswrapper[4875]: E0130 16:58:08.531918 4875 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:58:08 crc kubenswrapper[4875]: E0130 16:58:08.532025 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs podName:64282947-3e36-453a-b460-ada872b157c9 nodeName:}" failed. No retries permitted until 2026-01-30 16:59:12.531999859 +0000 UTC m=+163.079363332 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs") pod "network-metrics-daemon-ptnnq" (UID: "64282947-3e36-453a-b460-ada872b157c9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:58:10 crc kubenswrapper[4875]: I0130 16:58:10.135509 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:10 crc kubenswrapper[4875]: I0130 16:58:10.135737 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:10 crc kubenswrapper[4875]: I0130 16:58:10.135856 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:10 crc kubenswrapper[4875]: E0130 16:58:10.136742 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:10 crc kubenswrapper[4875]: I0130 16:58:10.136854 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:10 crc kubenswrapper[4875]: E0130 16:58:10.136887 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:10 crc kubenswrapper[4875]: E0130 16:58:10.137167 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:10 crc kubenswrapper[4875]: E0130 16:58:10.137341 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:11 crc kubenswrapper[4875]: I0130 16:58:11.137424 4875 scope.go:117] "RemoveContainer" containerID="41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c" Jan 30 16:58:11 crc kubenswrapper[4875]: E0130 16:58:11.138012 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" Jan 30 16:58:12 crc kubenswrapper[4875]: I0130 16:58:12.135310 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:12 crc kubenswrapper[4875]: I0130 16:58:12.135324 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:12 crc kubenswrapper[4875]: I0130 16:58:12.135423 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:12 crc kubenswrapper[4875]: E0130 16:58:12.135630 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:12 crc kubenswrapper[4875]: I0130 16:58:12.135885 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:12 crc kubenswrapper[4875]: E0130 16:58:12.136033 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:12 crc kubenswrapper[4875]: E0130 16:58:12.136189 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:12 crc kubenswrapper[4875]: E0130 16:58:12.137998 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:14 crc kubenswrapper[4875]: I0130 16:58:14.135978 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:14 crc kubenswrapper[4875]: I0130 16:58:14.136035 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:14 crc kubenswrapper[4875]: I0130 16:58:14.135992 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:14 crc kubenswrapper[4875]: I0130 16:58:14.135994 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:14 crc kubenswrapper[4875]: E0130 16:58:14.136121 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:14 crc kubenswrapper[4875]: E0130 16:58:14.136265 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:14 crc kubenswrapper[4875]: E0130 16:58:14.136348 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:14 crc kubenswrapper[4875]: E0130 16:58:14.136413 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:16 crc kubenswrapper[4875]: I0130 16:58:16.135966 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:16 crc kubenswrapper[4875]: I0130 16:58:16.136079 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:16 crc kubenswrapper[4875]: I0130 16:58:16.136370 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:16 crc kubenswrapper[4875]: E0130 16:58:16.136376 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:16 crc kubenswrapper[4875]: I0130 16:58:16.136511 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:16 crc kubenswrapper[4875]: E0130 16:58:16.136510 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:16 crc kubenswrapper[4875]: E0130 16:58:16.136564 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:16 crc kubenswrapper[4875]: E0130 16:58:16.136785 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:18 crc kubenswrapper[4875]: I0130 16:58:18.135838 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:18 crc kubenswrapper[4875]: I0130 16:58:18.135898 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:18 crc kubenswrapper[4875]: I0130 16:58:18.135972 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:18 crc kubenswrapper[4875]: I0130 16:58:18.135859 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:18 crc kubenswrapper[4875]: E0130 16:58:18.136069 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:18 crc kubenswrapper[4875]: E0130 16:58:18.136357 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:18 crc kubenswrapper[4875]: E0130 16:58:18.136436 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:18 crc kubenswrapper[4875]: E0130 16:58:18.136390 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:20 crc kubenswrapper[4875]: I0130 16:58:20.135740 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:20 crc kubenswrapper[4875]: I0130 16:58:20.135795 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:20 crc kubenswrapper[4875]: I0130 16:58:20.135907 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:20 crc kubenswrapper[4875]: E0130 16:58:20.136704 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:20 crc kubenswrapper[4875]: I0130 16:58:20.136757 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:20 crc kubenswrapper[4875]: E0130 16:58:20.136863 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:20 crc kubenswrapper[4875]: E0130 16:58:20.136960 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:20 crc kubenswrapper[4875]: E0130 16:58:20.137338 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:22 crc kubenswrapper[4875]: I0130 16:58:22.135291 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:22 crc kubenswrapper[4875]: I0130 16:58:22.135351 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:22 crc kubenswrapper[4875]: E0130 16:58:22.135492 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:22 crc kubenswrapper[4875]: I0130 16:58:22.135766 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:22 crc kubenswrapper[4875]: I0130 16:58:22.135858 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:22 crc kubenswrapper[4875]: E0130 16:58:22.135970 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:22 crc kubenswrapper[4875]: E0130 16:58:22.136142 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:22 crc kubenswrapper[4875]: E0130 16:58:22.136194 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:24 crc kubenswrapper[4875]: I0130 16:58:24.135407 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:24 crc kubenswrapper[4875]: I0130 16:58:24.135463 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:24 crc kubenswrapper[4875]: E0130 16:58:24.136287 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:24 crc kubenswrapper[4875]: I0130 16:58:24.135641 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:24 crc kubenswrapper[4875]: E0130 16:58:24.136350 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:24 crc kubenswrapper[4875]: I0130 16:58:24.135539 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:24 crc kubenswrapper[4875]: E0130 16:58:24.136393 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:24 crc kubenswrapper[4875]: E0130 16:58:24.136465 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:24 crc kubenswrapper[4875]: I0130 16:58:24.136853 4875 scope.go:117] "RemoveContainer" containerID="41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c" Jan 30 16:58:24 crc kubenswrapper[4875]: E0130 16:58:24.137186 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mps6c_openshift-ovn-kubernetes(85cf29f6-017d-475a-b63c-cd1cab3c8132)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" Jan 30 16:58:24 crc kubenswrapper[4875]: I0130 16:58:24.728218 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ck4hq_562b7bc8-0631-497c-9b8a-05af82dcfff9/kube-multus/1.log" Jan 30 16:58:24 crc kubenswrapper[4875]: I0130 16:58:24.728803 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ck4hq_562b7bc8-0631-497c-9b8a-05af82dcfff9/kube-multus/0.log" Jan 30 16:58:24 crc kubenswrapper[4875]: I0130 16:58:24.728848 4875 generic.go:334] "Generic (PLEG): container finished" podID="562b7bc8-0631-497c-9b8a-05af82dcfff9" containerID="3b26a1f922e0214d976c84feb63e7ad8957d0d356ff5287eb78b1a6eaf4564ac" exitCode=1 Jan 30 16:58:24 crc kubenswrapper[4875]: I0130 16:58:24.728882 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ck4hq" event={"ID":"562b7bc8-0631-497c-9b8a-05af82dcfff9","Type":"ContainerDied","Data":"3b26a1f922e0214d976c84feb63e7ad8957d0d356ff5287eb78b1a6eaf4564ac"} Jan 30 16:58:24 crc kubenswrapper[4875]: I0130 16:58:24.728919 4875 scope.go:117] "RemoveContainer" containerID="3e0600e5a37ac5dcd1bf728c4e96c34da1032ab25fff6f41f7edd93cfff1a32a" Jan 30 16:58:24 crc kubenswrapper[4875]: I0130 16:58:24.729685 4875 scope.go:117] "RemoveContainer" containerID="3b26a1f922e0214d976c84feb63e7ad8957d0d356ff5287eb78b1a6eaf4564ac" Jan 30 16:58:24 crc kubenswrapper[4875]: E0130 16:58:24.730167 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-ck4hq_openshift-multus(562b7bc8-0631-497c-9b8a-05af82dcfff9)\"" pod="openshift-multus/multus-ck4hq" podUID="562b7bc8-0631-497c-9b8a-05af82dcfff9" Jan 30 16:58:25 crc kubenswrapper[4875]: I0130 16:58:25.734833 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ck4hq_562b7bc8-0631-497c-9b8a-05af82dcfff9/kube-multus/1.log" Jan 30 16:58:26 crc kubenswrapper[4875]: I0130 16:58:26.135122 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:26 crc kubenswrapper[4875]: I0130 16:58:26.135190 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:26 crc kubenswrapper[4875]: I0130 16:58:26.135213 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:26 crc kubenswrapper[4875]: I0130 16:58:26.135122 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:26 crc kubenswrapper[4875]: E0130 16:58:26.135373 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:26 crc kubenswrapper[4875]: E0130 16:58:26.135515 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:26 crc kubenswrapper[4875]: E0130 16:58:26.135856 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:26 crc kubenswrapper[4875]: E0130 16:58:26.136008 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:28 crc kubenswrapper[4875]: I0130 16:58:28.135355 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:28 crc kubenswrapper[4875]: E0130 16:58:28.135568 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:28 crc kubenswrapper[4875]: I0130 16:58:28.135946 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:28 crc kubenswrapper[4875]: I0130 16:58:28.135960 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:28 crc kubenswrapper[4875]: I0130 16:58:28.136100 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:28 crc kubenswrapper[4875]: E0130 16:58:28.136176 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:28 crc kubenswrapper[4875]: E0130 16:58:28.136333 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:28 crc kubenswrapper[4875]: E0130 16:58:28.136620 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:30 crc kubenswrapper[4875]: E0130 16:58:30.096983 4875 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 16:58:30 crc kubenswrapper[4875]: I0130 16:58:30.135161 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:30 crc kubenswrapper[4875]: I0130 16:58:30.135196 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:30 crc kubenswrapper[4875]: I0130 16:58:30.135383 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:30 crc kubenswrapper[4875]: E0130 16:58:30.136270 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:30 crc kubenswrapper[4875]: I0130 16:58:30.136331 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:30 crc kubenswrapper[4875]: E0130 16:58:30.136423 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:30 crc kubenswrapper[4875]: E0130 16:58:30.136500 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:30 crc kubenswrapper[4875]: E0130 16:58:30.136726 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:30 crc kubenswrapper[4875]: E0130 16:58:30.239368 4875 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:58:32 crc kubenswrapper[4875]: I0130 16:58:32.135829 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:32 crc kubenswrapper[4875]: I0130 16:58:32.135982 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:32 crc kubenswrapper[4875]: I0130 16:58:32.135857 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:32 crc kubenswrapper[4875]: E0130 16:58:32.136072 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:32 crc kubenswrapper[4875]: I0130 16:58:32.136126 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:32 crc kubenswrapper[4875]: E0130 16:58:32.136547 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:32 crc kubenswrapper[4875]: E0130 16:58:32.136386 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:32 crc kubenswrapper[4875]: E0130 16:58:32.136742 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:34 crc kubenswrapper[4875]: I0130 16:58:34.135881 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:34 crc kubenswrapper[4875]: I0130 16:58:34.135982 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:34 crc kubenswrapper[4875]: E0130 16:58:34.136462 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:34 crc kubenswrapper[4875]: I0130 16:58:34.136041 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:34 crc kubenswrapper[4875]: I0130 16:58:34.135982 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:34 crc kubenswrapper[4875]: E0130 16:58:34.136554 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:34 crc kubenswrapper[4875]: E0130 16:58:34.136687 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:34 crc kubenswrapper[4875]: E0130 16:58:34.136791 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:35 crc kubenswrapper[4875]: E0130 16:58:35.241127 4875 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:58:36 crc kubenswrapper[4875]: I0130 16:58:36.135936 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:36 crc kubenswrapper[4875]: I0130 16:58:36.136013 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:36 crc kubenswrapper[4875]: E0130 16:58:36.136082 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:36 crc kubenswrapper[4875]: I0130 16:58:36.135938 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:36 crc kubenswrapper[4875]: I0130 16:58:36.135940 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:36 crc kubenswrapper[4875]: E0130 16:58:36.136254 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:36 crc kubenswrapper[4875]: E0130 16:58:36.136449 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:36 crc kubenswrapper[4875]: E0130 16:58:36.136529 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:38 crc kubenswrapper[4875]: I0130 16:58:38.135910 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:38 crc kubenswrapper[4875]: I0130 16:58:38.135982 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:38 crc kubenswrapper[4875]: I0130 16:58:38.136017 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:38 crc kubenswrapper[4875]: I0130 16:58:38.135932 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:38 crc kubenswrapper[4875]: E0130 16:58:38.136105 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:38 crc kubenswrapper[4875]: E0130 16:58:38.136188 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:38 crc kubenswrapper[4875]: E0130 16:58:38.136287 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:38 crc kubenswrapper[4875]: E0130 16:58:38.136375 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:39 crc kubenswrapper[4875]: I0130 16:58:39.136504 4875 scope.go:117] "RemoveContainer" containerID="41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c" Jan 30 16:58:39 crc kubenswrapper[4875]: I0130 16:58:39.784361 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovnkube-controller/3.log" Jan 30 16:58:39 crc kubenswrapper[4875]: I0130 16:58:39.787027 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerStarted","Data":"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260"} Jan 30 16:58:39 crc kubenswrapper[4875]: I0130 16:58:39.787370 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:58:39 crc kubenswrapper[4875]: I0130 16:58:39.936857 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podStartSLOduration=109.93682882 podStartE2EDuration="1m49.93682882s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:39.814991561 +0000 UTC m=+130.362354954" watchObservedRunningTime="2026-01-30 16:58:39.93682882 +0000 UTC m=+130.484192243" Jan 30 16:58:39 crc kubenswrapper[4875]: I0130 16:58:39.938274 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ptnnq"] Jan 30 16:58:39 crc kubenswrapper[4875]: I0130 16:58:39.938406 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:39 crc kubenswrapper[4875]: E0130 16:58:39.938553 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:40 crc kubenswrapper[4875]: I0130 16:58:40.135434 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:40 crc kubenswrapper[4875]: I0130 16:58:40.135426 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:40 crc kubenswrapper[4875]: I0130 16:58:40.137714 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:40 crc kubenswrapper[4875]: E0130 16:58:40.137871 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:40 crc kubenswrapper[4875]: E0130 16:58:40.137978 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:40 crc kubenswrapper[4875]: E0130 16:58:40.138095 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:40 crc kubenswrapper[4875]: I0130 16:58:40.138122 4875 scope.go:117] "RemoveContainer" containerID="3b26a1f922e0214d976c84feb63e7ad8957d0d356ff5287eb78b1a6eaf4564ac" Jan 30 16:58:40 crc kubenswrapper[4875]: E0130 16:58:40.241668 4875 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:58:40 crc kubenswrapper[4875]: I0130 16:58:40.791722 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ck4hq_562b7bc8-0631-497c-9b8a-05af82dcfff9/kube-multus/1.log" Jan 30 16:58:40 crc kubenswrapper[4875]: I0130 16:58:40.791813 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ck4hq" event={"ID":"562b7bc8-0631-497c-9b8a-05af82dcfff9","Type":"ContainerStarted","Data":"62c943c842d51e922bb22248b6399f5410f8500f6276b2f741a1e5b35ad9a256"} Jan 30 16:58:42 crc kubenswrapper[4875]: I0130 16:58:42.135708 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:42 crc kubenswrapper[4875]: I0130 16:58:42.135711 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:42 crc kubenswrapper[4875]: I0130 16:58:42.135722 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:42 crc kubenswrapper[4875]: I0130 16:58:42.135881 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:42 crc kubenswrapper[4875]: E0130 16:58:42.136125 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:42 crc kubenswrapper[4875]: E0130 16:58:42.136448 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:42 crc kubenswrapper[4875]: E0130 16:58:42.136655 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:42 crc kubenswrapper[4875]: E0130 16:58:42.136951 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:44 crc kubenswrapper[4875]: I0130 16:58:44.743261 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:44 crc kubenswrapper[4875]: I0130 16:58:44.743393 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:44 crc kubenswrapper[4875]: E0130 16:58:44.744786 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:58:44 crc kubenswrapper[4875]: I0130 16:58:44.743631 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:44 crc kubenswrapper[4875]: E0130 16:58:44.744837 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ptnnq" podUID="64282947-3e36-453a-b460-ada872b157c9" Jan 30 16:58:44 crc kubenswrapper[4875]: I0130 16:58:44.743435 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:44 crc kubenswrapper[4875]: E0130 16:58:44.744907 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:58:44 crc kubenswrapper[4875]: E0130 16:58:44.745101 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:58:45 crc kubenswrapper[4875]: I0130 16:58:45.364265 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 16:58:46 crc kubenswrapper[4875]: I0130 16:58:46.135572 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:46 crc kubenswrapper[4875]: I0130 16:58:46.135668 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:46 crc kubenswrapper[4875]: I0130 16:58:46.135689 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:46 crc kubenswrapper[4875]: I0130 16:58:46.135627 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:58:46 crc kubenswrapper[4875]: I0130 16:58:46.138539 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 16:58:46 crc kubenswrapper[4875]: I0130 16:58:46.138573 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 16:58:46 crc kubenswrapper[4875]: I0130 16:58:46.138752 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 16:58:46 crc kubenswrapper[4875]: I0130 16:58:46.139305 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 16:58:46 crc kubenswrapper[4875]: I0130 16:58:46.139382 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 16:58:46 crc kubenswrapper[4875]: I0130 16:58:46.142390 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.709999 4875 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.775874 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-j2q7s"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.776314 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.778846 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.778875 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxc56"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.779387 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxc56" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.779607 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2qrng"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.780354 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.780479 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.780609 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.780853 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.780988 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.781451 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.785744 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.786518 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.786889 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.786937 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.786942 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.787015 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qtgzv"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.787628 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.788767 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-serving-cert\") pod \"route-controller-manager-6576b87f9c-m6fdf\" (UID: \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.788794 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwh5p\" (UniqueName: \"kubernetes.io/projected/3fedc583-ecaa-4f4a-842b-f5276040b18c-kube-api-access-bwh5p\") pod \"openshift-apiserver-operator-796bbdcf4f-wxc56\" (UID: \"3fedc583-ecaa-4f4a-842b-f5276040b18c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxc56" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.788813 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/56f1b088-2293-4064-b76b-40b9bc9ef3d5-images\") pod \"machine-api-operator-5694c8668f-j2q7s\" (UID: \"56f1b088-2293-4064-b76b-40b9bc9ef3d5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.788835 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsl2v\" (UniqueName: \"kubernetes.io/projected/56f1b088-2293-4064-b76b-40b9bc9ef3d5-kube-api-access-fsl2v\") pod \"machine-api-operator-5694c8668f-j2q7s\" (UID: \"56f1b088-2293-4064-b76b-40b9bc9ef3d5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.788854 4875 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f1b088-2293-4064-b76b-40b9bc9ef3d5-config\") pod \"machine-api-operator-5694c8668f-j2q7s\" (UID: \"56f1b088-2293-4064-b76b-40b9bc9ef3d5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.788884 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fedc583-ecaa-4f4a-842b-f5276040b18c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wxc56\" (UID: \"3fedc583-ecaa-4f4a-842b-f5276040b18c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxc56" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.788954 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f4d2781f-afa7-44e3-967b-08aaea623583-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2qrng\" (UID: \"f4d2781f-afa7-44e3-967b-08aaea623583\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.788998 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-client-ca\") pod \"controller-manager-879f6c89f-qtgzv\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789049 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-client-ca\") pod \"route-controller-manager-6576b87f9c-m6fdf\" (UID: \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789080 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/56f1b088-2293-4064-b76b-40b9bc9ef3d5-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-j2q7s\" (UID: \"56f1b088-2293-4064-b76b-40b9bc9ef3d5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789108 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qtgzv\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789157 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dthvk\" (UniqueName: \"kubernetes.io/projected/f4d2781f-afa7-44e3-967b-08aaea623583-kube-api-access-dthvk\") pod \"openshift-config-operator-7777fb866f-2qrng\" (UID: \"f4d2781f-afa7-44e3-967b-08aaea623583\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789181 
4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-config\") pod \"controller-manager-879f6c89f-qtgzv\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789207 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fedc583-ecaa-4f4a-842b-f5276040b18c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wxc56\" (UID: \"3fedc583-ecaa-4f4a-842b-f5276040b18c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxc56" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789387 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dn8g\" (UniqueName: \"kubernetes.io/projected/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-kube-api-access-2dn8g\") pod \"controller-manager-879f6c89f-qtgzv\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789449 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf77x\" (UniqueName: \"kubernetes.io/projected/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-kube-api-access-bf77x\") pod \"route-controller-manager-6576b87f9c-m6fdf\" (UID: \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789464 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789484 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4d2781f-afa7-44e3-967b-08aaea623583-serving-cert\") pod \"openshift-config-operator-7777fb866f-2qrng\" (UID: \"f4d2781f-afa7-44e3-967b-08aaea623583\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789528 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-serving-cert\") pod \"controller-manager-879f6c89f-qtgzv\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789561 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-config\") pod \"route-controller-manager-6576b87f9c-m6fdf\" (UID: \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789726 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789808 4875 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789875 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789922 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gtwl2"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789996 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.789895 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.790182 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.790430 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.790538 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.790809 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.790540 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.791106 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.791314 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.791415 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.791435 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gv6jw"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.792649 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.792683 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.792934 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.795885 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.796741 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.796799 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.797303 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.797425 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.797656 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.797919 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.798041 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.798082 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.798046 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.798786 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.798863 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.799068 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.799109 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.799320 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.799540 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.799745 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.803825 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 16:58:49 crc 
kubenswrapper[4875]: I0130 16:58:49.805508 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.813231 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2d4sj"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.813869 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-7s4zv"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.814098 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.814291 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.814759 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.814867 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mhw2"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.815144 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.815402 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.815774 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mhw2" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.817495 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ht6ll"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.818132 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ht6ll" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.821286 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.821539 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8sjn"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.823263 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8sjn" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.823700 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.824061 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.840927 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.841695 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.842208 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4gqn8"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.851656 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.884441 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.884624 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.884723 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.884976 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vcs72"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.885448 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.885505 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4gqn8" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.885536 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.885985 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.886022 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.886114 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.886138 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.886192 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.886293 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.886496 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.886605 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.886718 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.886747 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.886932 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.887084 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.887156 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.887191 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.887235 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.887283 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.887314 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.887377 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.887391 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.851754 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.887474 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.890290 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-etcd-client\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.890321 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a47f63-146d-4621-8bd2-fdb469f0fc8a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gtwl2\" (UID: \"50a47f63-146d-4621-8bd2-fdb469f0fc8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.890339 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjszw\" (UniqueName: \"kubernetes.io/projected/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-kube-api-access-kjszw\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.890359 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/f4d2781f-afa7-44e3-967b-08aaea623583-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2qrng\" (UID: \"f4d2781f-afa7-44e3-967b-08aaea623583\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.890377 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhppt\" (UniqueName: \"kubernetes.io/projected/bfaa9666-5e7d-4a64-8bc5-1936748f9375-kube-api-access-lhppt\") pod \"cluster-samples-operator-665b6dd947-9mhw2\" (UID: \"bfaa9666-5e7d-4a64-8bc5-1936748f9375\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mhw2" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.890395 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-client-ca\") pod \"controller-manager-879f6c89f-qtgzv\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.890421 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-client-ca\") pod \"route-controller-manager-6576b87f9c-m6fdf\" (UID: \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.890436 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/56f1b088-2293-4064-b76b-40b9bc9ef3d5-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-j2q7s\" (UID: \"56f1b088-2293-4064-b76b-40b9bc9ef3d5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.890451 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-audit-dir\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.890466 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qtgzv\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.890488 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d01c20ec-32e4-4ffe-af84-a7e75df66733-config\") pod \"kube-controller-manager-operator-78b949d7b-h8sjn\" (UID: \"d01c20ec-32e4-4ffe-af84-a7e75df66733\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8sjn" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.890503 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-audit-policies\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.890525 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dthvk\" (UniqueName: \"kubernetes.io/projected/f4d2781f-afa7-44e3-967b-08aaea623583-kube-api-access-dthvk\") pod \"openshift-config-operator-7777fb866f-2qrng\" (UID: \"f4d2781f-afa7-44e3-967b-08aaea623583\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.890540 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-config\") pod \"controller-manager-879f6c89f-qtgzv\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.890555 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fedc583-ecaa-4f4a-842b-f5276040b18c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wxc56\" (UID: \"3fedc583-ecaa-4f4a-842b-f5276040b18c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxc56" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.890572 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vrm9\" (UniqueName: \"kubernetes.io/projected/50a47f63-146d-4621-8bd2-fdb469f0fc8a-kube-api-access-2vrm9\") pod \"authentication-operator-69f744f599-gtwl2\" (UID: \"50a47f63-146d-4621-8bd2-fdb469f0fc8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.891657 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dn8g\" (UniqueName: \"kubernetes.io/projected/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-kube-api-access-2dn8g\") pod \"controller-manager-879f6c89f-qtgzv\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.891702 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-serving-cert\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.891718 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d01c20ec-32e4-4ffe-af84-a7e75df66733-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h8sjn\" (UID: \"d01c20ec-32e4-4ffe-af84-a7e75df66733\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8sjn" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.891733 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/50a47f63-146d-4621-8bd2-fdb469f0fc8a-config\") pod \"authentication-operator-69f744f599-gtwl2\" (UID: \"50a47f63-146d-4621-8bd2-fdb469f0fc8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.891753 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf77x\" (UniqueName: \"kubernetes.io/projected/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-kube-api-access-bf77x\") pod \"route-controller-manager-6576b87f9c-m6fdf\" (UID: \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.891770 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4d2781f-afa7-44e3-967b-08aaea623583-serving-cert\") pod \"openshift-config-operator-7777fb866f-2qrng\" (UID: \"f4d2781f-afa7-44e3-967b-08aaea623583\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.891786 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-serving-cert\") pod \"controller-manager-879f6c89f-qtgzv\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.891804 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-config\") pod \"route-controller-manager-6576b87f9c-m6fdf\" (UID: \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.891821 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-serving-cert\") pod \"route-controller-manager-6576b87f9c-m6fdf\" (UID: \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.891837 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwh5p\" (UniqueName: \"kubernetes.io/projected/3fedc583-ecaa-4f4a-842b-f5276040b18c-kube-api-access-bwh5p\") pod \"openshift-apiserver-operator-796bbdcf4f-wxc56\" (UID: \"3fedc583-ecaa-4f4a-842b-f5276040b18c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxc56" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.891856 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/56f1b088-2293-4064-b76b-40b9bc9ef3d5-images\") pod \"machine-api-operator-5694c8668f-j2q7s\" (UID: \"56f1b088-2293-4064-b76b-40b9bc9ef3d5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.891886 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-etcd-serving-ca\") pod 
\"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.891914 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50a47f63-146d-4621-8bd2-fdb469f0fc8a-serving-cert\") pod \"authentication-operator-69f744f599-gtwl2\" (UID: \"50a47f63-146d-4621-8bd2-fdb469f0fc8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.891930 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a47f63-146d-4621-8bd2-fdb469f0fc8a-service-ca-bundle\") pod \"authentication-operator-69f744f599-gtwl2\" (UID: \"50a47f63-146d-4621-8bd2-fdb469f0fc8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.891949 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsl2v\" (UniqueName: \"kubernetes.io/projected/56f1b088-2293-4064-b76b-40b9bc9ef3d5-kube-api-access-fsl2v\") pod \"machine-api-operator-5694c8668f-j2q7s\" (UID: \"56f1b088-2293-4064-b76b-40b9bc9ef3d5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.891966 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f1b088-2293-4064-b76b-40b9bc9ef3d5-config\") pod \"machine-api-operator-5694c8668f-j2q7s\" (UID: \"56f1b088-2293-4064-b76b-40b9bc9ef3d5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.891984 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfaa9666-5e7d-4a64-8bc5-1936748f9375-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9mhw2\" (UID: \"bfaa9666-5e7d-4a64-8bc5-1936748f9375\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mhw2" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.892010 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d01c20ec-32e4-4ffe-af84-a7e75df66733-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h8sjn\" (UID: \"d01c20ec-32e4-4ffe-af84-a7e75df66733\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8sjn" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.892025 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.892040 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-encryption-config\") pod 
\"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.892061 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fedc583-ecaa-4f4a-842b-f5276040b18c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wxc56\" (UID: \"3fedc583-ecaa-4f4a-842b-f5276040b18c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxc56" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.894333 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f4d2781f-afa7-44e3-967b-08aaea623583-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2qrng\" (UID: \"f4d2781f-afa7-44e3-967b-08aaea623583\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.894764 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.894997 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.897302 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-client-ca\") pod \"route-controller-manager-6576b87f9c-m6fdf\" (UID: \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.898966 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-config\") pod \"route-controller-manager-6576b87f9c-m6fdf\" (UID: \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.899606 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fedc583-ecaa-4f4a-842b-f5276040b18c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wxc56\" (UID: \"3fedc583-ecaa-4f4a-842b-f5276040b18c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxc56" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.900334 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56f1b088-2293-4064-b76b-40b9bc9ef3d5-config\") pod \"machine-api-operator-5694c8668f-j2q7s\" (UID: \"56f1b088-2293-4064-b76b-40b9bc9ef3d5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.900800 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-qtgzv\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.900961 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/56f1b088-2293-4064-b76b-40b9bc9ef3d5-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-j2q7s\" (UID: \"56f1b088-2293-4064-b76b-40b9bc9ef3d5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.902010 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fedc583-ecaa-4f4a-842b-f5276040b18c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wxc56\" (UID: \"3fedc583-ecaa-4f4a-842b-f5276040b18c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxc56" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.902144 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4d2781f-afa7-44e3-967b-08aaea623583-serving-cert\") pod \"openshift-config-operator-7777fb866f-2qrng\" (UID: \"f4d2781f-afa7-44e3-967b-08aaea623583\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.902161 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.902388 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.902442 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.902755 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.902827 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.902899 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-client-ca\") pod \"controller-manager-879f6c89f-qtgzv\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.902954 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.902979 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.903019 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.903052 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.903090 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.903125 
4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.902750 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/56f1b088-2293-4064-b76b-40b9bc9ef3d5-images\") pod \"machine-api-operator-5694c8668f-j2q7s\" (UID: \"56f1b088-2293-4064-b76b-40b9bc9ef3d5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.903220 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.903240 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.903337 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.903344 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.903406 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.903696 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.903777 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.903955 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-serving-cert\") pod \"controller-manager-879f6c89f-qtgzv\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.903971 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.904145 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.907353 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5v2bh"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.908067 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.908183 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-5v2bh" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.910798 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-config\") pod \"controller-manager-879f6c89f-qtgzv\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.915257 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-serving-cert\") pod \"route-controller-manager-6576b87f9c-m6fdf\" (UID: \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.917367 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.917665 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.918901 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.919032 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.919120 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.919193 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.919214 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.919297 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.921386 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.925075 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-stgmg"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.925715 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-stgmg" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.931030 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.926381 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mjwmm"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.933627 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrtj9"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.934330 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mjwmm" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.942421 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrtj9" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.942654 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.952030 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.952030 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.959292 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.961120 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.962959 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-qc97s"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.964356 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8sjsp"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.964365 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-qc97s" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.966820 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-j2q7s"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.967007 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.967417 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.968070 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.968664 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.971141 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pwgd8"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.971766 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pwgd8" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.971969 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.972562 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmcnv"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.973220 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmcnv" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.974000 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.976060 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2njb9"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.976400 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2njb9" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.976565 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.976990 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.977366 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.978651 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.979160 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h4ql7"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.979789 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-h4ql7" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.980670 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6hpsd"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.981332 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.982201 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxc56"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.983315 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s24dp"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.988690 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.989111 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.989636 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.989744 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s24dp" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.989904 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.991093 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-r4zq5"] Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.991484 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4zq5" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993040 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfaa9666-5e7d-4a64-8bc5-1936748f9375-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9mhw2\" (UID: \"bfaa9666-5e7d-4a64-8bc5-1936748f9375\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mhw2" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993076 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d01c20ec-32e4-4ffe-af84-a7e75df66733-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h8sjn\" (UID: \"d01c20ec-32e4-4ffe-af84-a7e75df66733\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8sjn" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993098 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993116 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-encryption-config\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993134 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-etcd-client\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993155 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a47f63-146d-4621-8bd2-fdb469f0fc8a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gtwl2\" (UID: \"50a47f63-146d-4621-8bd2-fdb469f0fc8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993174 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjszw\" (UniqueName: \"kubernetes.io/projected/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-kube-api-access-kjszw\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993190 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhppt\" (UniqueName: \"kubernetes.io/projected/bfaa9666-5e7d-4a64-8bc5-1936748f9375-kube-api-access-lhppt\") pod \"cluster-samples-operator-665b6dd947-9mhw2\" (UID: \"bfaa9666-5e7d-4a64-8bc5-1936748f9375\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mhw2" Jan 30 16:58:49 crc 
kubenswrapper[4875]: I0130 16:58:49.993221 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-audit-dir\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993260 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d01c20ec-32e4-4ffe-af84-a7e75df66733-config\") pod \"kube-controller-manager-operator-78b949d7b-h8sjn\" (UID: \"d01c20ec-32e4-4ffe-af84-a7e75df66733\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8sjn" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993290 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-audit-policies\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993310 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vrm9\" (UniqueName: \"kubernetes.io/projected/50a47f63-146d-4621-8bd2-fdb469f0fc8a-kube-api-access-2vrm9\") pod \"authentication-operator-69f744f599-gtwl2\" (UID: \"50a47f63-146d-4621-8bd2-fdb469f0fc8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993332 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-serving-cert\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993351 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d01c20ec-32e4-4ffe-af84-a7e75df66733-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h8sjn\" (UID: \"d01c20ec-32e4-4ffe-af84-a7e75df66733\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8sjn" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993366 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50a47f63-146d-4621-8bd2-fdb469f0fc8a-config\") pod \"authentication-operator-69f744f599-gtwl2\" (UID: \"50a47f63-146d-4621-8bd2-fdb469f0fc8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993395 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993414 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/50a47f63-146d-4621-8bd2-fdb469f0fc8a-serving-cert\") pod \"authentication-operator-69f744f599-gtwl2\" (UID: \"50a47f63-146d-4621-8bd2-fdb469f0fc8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993431 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a47f63-146d-4621-8bd2-fdb469f0fc8a-service-ca-bundle\") pod \"authentication-operator-69f744f599-gtwl2\" (UID: \"50a47f63-146d-4621-8bd2-fdb469f0fc8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.993895 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.994155 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a47f63-146d-4621-8bd2-fdb469f0fc8a-service-ca-bundle\") pod \"authentication-operator-69f744f599-gtwl2\" (UID: \"50a47f63-146d-4621-8bd2-fdb469f0fc8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.994439 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d01c20ec-32e4-4ffe-af84-a7e75df66733-config\") pod \"kube-controller-manager-operator-78b949d7b-h8sjn\" (UID: \"d01c20ec-32e4-4ffe-af84-a7e75df66733\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8sjn" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.994836 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.995263 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-audit-dir\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.995518 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-audit-policies\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.995861 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50a47f63-146d-4621-8bd2-fdb469f0fc8a-config\") pod \"authentication-operator-69f744f599-gtwl2\" (UID: \"50a47f63-146d-4621-8bd2-fdb469f0fc8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2" 
Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.997988 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gtwl2"]
Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.998535 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a47f63-146d-4621-8bd2-fdb469f0fc8a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gtwl2\" (UID: \"50a47f63-146d-4621-8bd2-fdb469f0fc8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2"
Jan 30 16:58:49 crc kubenswrapper[4875]: I0130 16:58:49.999191 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-encryption-config\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:49.999849 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfaa9666-5e7d-4a64-8bc5-1936748f9375-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9mhw2\" (UID: \"bfaa9666-5e7d-4a64-8bc5-1936748f9375\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mhw2"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:49.999926 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2qrng"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.001641 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.002647 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pgmbb"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.003287 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pgmbb"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.004136 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-serving-cert\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.004177 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.004468 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-etcd-client\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.005768 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2d4sj"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.006383 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50a47f63-146d-4621-8bd2-fdb469f0fc8a-serving-cert\") pod \"authentication-operator-69f744f599-gtwl2\" (UID: \"50a47f63-146d-4621-8bd2-fdb469f0fc8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.006560 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gv6jw"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.009374 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.009442 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8ft6n"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.011576 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4gqn8"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.011619 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2njb9"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.011737 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8ft6n"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.013371 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8sjn"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.014831 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.016106 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mhw2"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.017619 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ht6ll"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.018799 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-stgmg"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.019854 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.021821 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7s4zv"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.022396 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6hpsd"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.023495 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmcnv"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.024395 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.024595 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d01c20ec-32e4-4ffe-af84-a7e75df66733-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h8sjn\" (UID: \"d01c20ec-32e4-4ffe-af84-a7e75df66733\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8sjn"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.025374 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mjwmm"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.031324 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.034315 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pgmbb"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.040091 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.043958 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrtj9"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.045121 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.045222 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.046280 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vcs72"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.047848 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-nsbwm"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.048624 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nsbwm"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.049036 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8sjsp"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.050524 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-r4zq5"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.052033 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s24dp"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.053772 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pwgd8"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.055407 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.056783 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qtgzv"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.058239 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.058638 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.059670 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qc97s"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.060661 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.061935 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nsbwm"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.063051 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h4ql7"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.064253 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mfrmm"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.065518 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5v28g"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.065674 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mfrmm"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.067029 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mfrmm"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.067131 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5v28g"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.068000 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5v28g"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.114549 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwh5p\" (UniqueName: \"kubernetes.io/projected/3fedc583-ecaa-4f4a-842b-f5276040b18c-kube-api-access-bwh5p\") pod \"openshift-apiserver-operator-796bbdcf4f-wxc56\" (UID: \"3fedc583-ecaa-4f4a-842b-f5276040b18c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxc56"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.132075 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsl2v\" (UniqueName: \"kubernetes.io/projected/56f1b088-2293-4064-b76b-40b9bc9ef3d5-kube-api-access-fsl2v\") pod \"machine-api-operator-5694c8668f-j2q7s\" (UID: \"56f1b088-2293-4064-b76b-40b9bc9ef3d5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.152978 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dthvk\" (UniqueName: \"kubernetes.io/projected/f4d2781f-afa7-44e3-967b-08aaea623583-kube-api-access-dthvk\") pod \"openshift-config-operator-7777fb866f-2qrng\" (UID: \"f4d2781f-afa7-44e3-967b-08aaea623583\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.185628 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.193468 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dn8g\" (UniqueName: \"kubernetes.io/projected/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-kube-api-access-2dn8g\") pod \"controller-manager-879f6c89f-qtgzv\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.196029 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf77x\" (UniqueName: \"kubernetes.io/projected/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-kube-api-access-bf77x\") pod \"route-controller-manager-6576b87f9c-m6fdf\" (UID: \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.221403 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.223162 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.236491 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.238761 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.261731 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.280121 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.303200 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.319887 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.340076 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.359304 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.379176 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.395778 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2qrng"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.399627 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.403338 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.411955 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.412111 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxc56"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.419983 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 30 16:58:50 crc kubenswrapper[4875]: W0130 16:58:50.429689 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1d4e20b_8815_42d1_b8e3_8d0f67d73860.slice/crio-d558c57d5d38ea317b6f8fc68ab83b7d7cf4a702d1dc9412c55283deeb99f100 WatchSource:0}: Error finding container d558c57d5d38ea317b6f8fc68ab83b7d7cf4a702d1dc9412c55283deeb99f100: Status 404 returned error can't find the container with id d558c57d5d38ea317b6f8fc68ab83b7d7cf4a702d1dc9412c55283deeb99f100
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.443042 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.446373 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qtgzv"]
Jan 30 16:58:50 crc kubenswrapper[4875]: W0130 16:58:50.453467 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb48b7a95_33c5_4ba6_a827_1fc5b36d49ec.slice/crio-90e84a0382fa26d5169143f45818d00fbf5cc99fb600a96d75ae702cd1aea043 WatchSource:0}: Error finding container 90e84a0382fa26d5169143f45818d00fbf5cc99fb600a96d75ae702cd1aea043: Status 404 returned error can't find the container with id 90e84a0382fa26d5169143f45818d00fbf5cc99fb600a96d75ae702cd1aea043
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.459141 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.478612 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.499939 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.519984 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.540053 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.559372 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.578839 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.583967 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxc56"]
Jan 30 16:58:50 crc kubenswrapper[4875]: W0130 16:58:50.592505 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fedc583_ecaa_4f4a_842b_f5276040b18c.slice/crio-aa3b581ab77df825248f21201007e51fb3e70c69e840d0ffa6d6afb77b57b18b WatchSource:0}: Error finding container aa3b581ab77df825248f21201007e51fb3e70c69e840d0ffa6d6afb77b57b18b: Status 404 returned error can't find the container with id aa3b581ab77df825248f21201007e51fb3e70c69e840d0ffa6d6afb77b57b18b
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.602195 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.619139 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.640289 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-j2q7s"]
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.643619 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.659165 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 30 16:58:50 crc kubenswrapper[4875]: W0130 16:58:50.662750 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56f1b088_2293_4064_b76b_40b9bc9ef3d5.slice/crio-d5243a16ff683f9ecef577556073642d108be7bc507caf849b233b5082e4aa3e WatchSource:0}: Error finding container d5243a16ff683f9ecef577556073642d108be7bc507caf849b233b5082e4aa3e: Status 404 returned error can't find the container with id d5243a16ff683f9ecef577556073642d108be7bc507caf849b233b5082e4aa3e
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.678176 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.698949 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.718570 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.739086 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.758666 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.779014 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.800139 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.819354 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.839622 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.859332 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.879909 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.890054 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" event={"ID":"e1d4e20b-8815-42d1-b8e3-8d0f67d73860","Type":"ContainerStarted","Data":"813340ee1ae349b91deab35ede41b17df4ef1d45139276599da9bd490d1cba4b"}
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.890097 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" event={"ID":"e1d4e20b-8815-42d1-b8e3-8d0f67d73860","Type":"ContainerStarted","Data":"d558c57d5d38ea317b6f8fc68ab83b7d7cf4a702d1dc9412c55283deeb99f100"}
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.890260 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.891251 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxc56" event={"ID":"3fedc583-ecaa-4f4a-842b-f5276040b18c","Type":"ContainerStarted","Data":"01eadf09bd84421d4f4f727dcf389235fcb435f74b1c65746e07c0ada11e4789"}
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.891297 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxc56" event={"ID":"3fedc583-ecaa-4f4a-842b-f5276040b18c","Type":"ContainerStarted","Data":"aa3b581ab77df825248f21201007e51fb3e70c69e840d0ffa6d6afb77b57b18b"}
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.892501 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng" event={"ID":"f4d2781f-afa7-44e3-967b-08aaea623583","Type":"ContainerStarted","Data":"b4ca77e153c7c7225178c2b52b94404c1c3ed6948c68f17b86a3d6f1d959faeb"}
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.892531 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng" event={"ID":"f4d2781f-afa7-44e3-967b-08aaea623583","Type":"ContainerStarted","Data":"552d37a21e28898655c192f1da20e3cebf2ec3844401fc76a8a3a1e90bdd3bf5"}
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.893065 4875 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-m6fdf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.893165 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" podUID="e1d4e20b-8815-42d1-b8e3-8d0f67d73860" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.893653 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s" event={"ID":"56f1b088-2293-4064-b76b-40b9bc9ef3d5","Type":"ContainerStarted","Data":"b3ada300d7190a217033e71285faa33a19dbec7dd425f7515395b8099ffb018f"}
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.893689 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s" event={"ID":"56f1b088-2293-4064-b76b-40b9bc9ef3d5","Type":"ContainerStarted","Data":"d5243a16ff683f9ecef577556073642d108be7bc507caf849b233b5082e4aa3e"}
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.895130 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" event={"ID":"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec","Type":"ContainerStarted","Data":"4062a8596051612270e4d7f53be7c400b8c427f4690f6ffd505d43171bb545dc"}
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.895157 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" event={"ID":"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec","Type":"ContainerStarted","Data":"90e84a0382fa26d5169143f45818d00fbf5cc99fb600a96d75ae702cd1aea043"}
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.895426 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.897415 4875 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qtgzv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.897449 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" podUID="b48b7a95-33c5-4ba6-a827-1fc5b36d49ec" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.899305 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.919756 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.940379 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.958332 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.976664 4875 request.go:700] Waited for 1.00443996s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-c2lfx&limit=500&resourceVersion=0
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.978762 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 30 16:58:50 crc kubenswrapper[4875]: I0130 16:58:50.999026 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.019534 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.040116 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.058887 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.079807 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.099008 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.119859 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.139329 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.160607 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.180082 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.200338 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.219386 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.239913 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.260072 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.280187 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.300137 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.319807 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.339522 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.359490 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.379040 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.405682 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.418838 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.438857 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.458889 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.479805 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.499429 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.519985 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.538979 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.559621 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.600059 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d01c20ec-32e4-4ffe-af84-a7e75df66733-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h8sjn\" (UID: \"d01c20ec-32e4-4ffe-af84-a7e75df66733\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8sjn"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.637472 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vrm9\" (UniqueName: \"kubernetes.io/projected/50a47f63-146d-4621-8bd2-fdb469f0fc8a-kube-api-access-2vrm9\") pod \"authentication-operator-69f744f599-gtwl2\" (UID: \"50a47f63-146d-4621-8bd2-fdb469f0fc8a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.646246 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjszw\" (UniqueName: \"kubernetes.io/projected/fa7f2369-f741-4a6e-af2c-4ead754f7ea4-kube-api-access-kjszw\") pod \"apiserver-7bbb656c7d-flhcf\" (UID: \"fa7f2369-f741-4a6e-af2c-4ead754f7ea4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.659182 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.665622 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhppt\" (UniqueName: \"kubernetes.io/projected/bfaa9666-5e7d-4a64-8bc5-1936748f9375-kube-api-access-lhppt\") pod \"cluster-samples-operator-665b6dd947-9mhw2\" (UID: \"bfaa9666-5e7d-4a64-8bc5-1936748f9375\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mhw2"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.680153 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.698550 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.718888 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.739936 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.752266 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.758896 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.780328 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.793723 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.799036 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.819890 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.839367 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.858475 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mhw2"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.859537 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.873984 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8sjn"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.879900 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.900774 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.922392 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.922543 4875 generic.go:334] "Generic (PLEG): container finished" podID="f4d2781f-afa7-44e3-967b-08aaea623583" containerID="b4ca77e153c7c7225178c2b52b94404c1c3ed6948c68f17b86a3d6f1d959faeb" exitCode=0
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.922732 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng" event={"ID":"f4d2781f-afa7-44e3-967b-08aaea623583","Type":"ContainerDied","Data":"b4ca77e153c7c7225178c2b52b94404c1c3ed6948c68f17b86a3d6f1d959faeb"}
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.927752 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s" event={"ID":"56f1b088-2293-4064-b76b-40b9bc9ef3d5","Type":"ContainerStarted","Data":"7e5d9fcbd8e9b9ec8245f567467db728a1057b32c1b63926b09d6219c3c7e120"}
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.928466 4875 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qtgzv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.928520 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" podUID="b48b7a95-33c5-4ba6-a827-1fc5b36d49ec" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.928916 4875 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-m6fdf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.928970 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" podUID="e1d4e20b-8815-42d1-b8e3-8d0f67d73860" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.938652 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.947712 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gtwl2"]
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.958375 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.976922 4875 request.go:700] Waited for 1.909512972s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&limit=500&resourceVersion=0
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.979123 4875 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 30 16:58:51 crc kubenswrapper[4875]: I0130 16:58:51.999540 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.010527 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf"]
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026110 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-service-ca\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026147 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c283ead9-a8b9-43ff-8188-5c583e3863f4-auth-proxy-config\") pod \"machine-approver-56656f9798-v7xv7\" (UID: \"c283ead9-a8b9-43ff-8188-5c583e3863f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026171 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndkxr\" (UniqueName: \"kubernetes.io/projected/a764d0e3-2762-4d13-b92e-30e68c104bf6-kube-api-access-ndkxr\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026204 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-registry-tls\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026221 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f681b0b0-d68c-44b4-816e-86756d55542c-registry-certificates\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026236 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a764d0e3-2762-4d13-b92e-30e68c104bf6-audit-dir\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026254 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-oauth-config\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026284 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8555787c-19c9-49c3-8b1a-7261cb693b97-trusted-ca\") pod \"ingress-operator-5b745b69d9-scxjx\" (UID: \"8555787c-19c9-49c3-8b1a-7261cb693b97\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026302 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/38f0d965-f1ec-4d01-9155-d3740a9ce78f-etcd-serving-ca\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026334 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b4f3833-7619-485d-9cee-761a80d9f294-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-29blr\" (UID: \"1b4f3833-7619-485d-9cee-761a80d9f294\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026355 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026377 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f0d965-f1ec-4d01-9155-d3740a9ce78f-config\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026396 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crkrg\" (UniqueName: \"kubernetes.io/projected/c283ead9-a8b9-43ff-8188-5c583e3863f4-kube-api-access-crkrg\") pod \"machine-approver-56656f9798-v7xv7\" (UID: \"c283ead9-a8b9-43ff-8188-5c583e3863f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026412 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b4f3833-7619-485d-9cee-761a80d9f294-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-29blr\" (UID: \"1b4f3833-7619-485d-9cee-761a80d9f294\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026427 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026442 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edaae5aa-0654-4349-9473-907e90886e59-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4gqn8\" (UID: \"edaae5aa-0654-4349-9473-907e90886e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4gqn8"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026478 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026493 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026509 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdq7v\" (UniqueName: \"kubernetes.io/projected/1b4f3833-7619-485d-9cee-761a80d9f294-kube-api-access-jdq7v\") pod \"cluster-image-registry-operator-dc59b4c8b-29blr\" (UID: \"1b4f3833-7619-485d-9cee-761a80d9f294\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026524 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/38f0d965-f1ec-4d01-9155-d3740a9ce78f-audit\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026543 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv6hk\" (UniqueName: \"kubernetes.io/projected/8555787c-19c9-49c3-8b1a-7261cb693b97-kube-api-access-dv6hk\") pod \"ingress-operator-5b745b69d9-scxjx\" (UID: \"8555787c-19c9-49c3-8b1a-7261cb693b97\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026774 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz2mb\" (UniqueName: \"kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-kube-api-access-qz2mb\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026816 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026863 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vll6\" (UniqueName: \"kubernetes.io/projected/ac862908-f2bf-42a2-b453-12f722f2cae3-kube-api-access-8vll6\") pod \"dns-operator-744455d44c-ht6ll\" (UID: \"ac862908-f2bf-42a2-b453-12f722f2cae3\") " pod="openshift-dns-operator/dns-operator-744455d44c-ht6ll"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026885 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/38f0d965-f1ec-4d01-9155-d3740a9ce78f-image-import-ca\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026913 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c283ead9-a8b9-43ff-8188-5c583e3863f4-config\") pod \"machine-approver-56656f9798-v7xv7\" (UID: \"c283ead9-a8b9-43ff-8188-5c583e3863f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026935 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38f0d965-f1ec-4d01-9155-d3740a9ce78f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026963 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.026986 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027011 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-config\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027035 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b4f3833-7619-485d-9cee-761a80d9f294-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-29blr\" (UID: \"1b4f3833-7619-485d-9cee-761a80d9f294\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027074 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027095 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac862908-f2bf-42a2-b453-12f722f2cae3-metrics-tls\") pod \"dns-operator-744455d44c-ht6ll\" (UID: \"ac862908-f2bf-42a2-b453-12f722f2cae3\") " pod="openshift-dns-operator/dns-operator-744455d44c-ht6ll"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027148 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-trusted-ca-bundle\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027171 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-oauth-serving-cert\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027193 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edaae5aa-0654-4349-9473-907e90886e59-config\") pod \"kube-apiserver-operator-766d6c64bb-4gqn8\" (UID: \"edaae5aa-0654-4349-9473-907e90886e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4gqn8"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027217 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38f0d965-f1ec-4d01-9155-d3740a9ce78f-serving-cert\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027263 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027294 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/38f0d965-f1ec-4d01-9155-d3740a9ce78f-node-pullsecrets\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027344 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-audit-policies\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027378 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f681b0b0-d68c-44b4-816e-86756d55542c-trusted-ca\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027400 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027421 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027449 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027470 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/edaae5aa-0654-4349-9473-907e90886e59-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4gqn8\" (UID: \"edaae5aa-0654-4349-9473-907e90886e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4gqn8"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027491 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c283ead9-a8b9-43ff-8188-5c583e3863f4-machine-approver-tls\") pod \"machine-approver-56656f9798-v7xv7\" (UID: \"c283ead9-a8b9-43ff-8188-5c583e3863f4\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027523 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-serving-cert\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027550 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f681b0b0-d68c-44b4-816e-86756d55542c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027571 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsdjs\" (UniqueName: \"kubernetes.io/projected/37fa5454-ad47-4960-be87-5d9d4e4eab0f-kube-api-access-hsdjs\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027611 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8555787c-19c9-49c3-8b1a-7261cb693b97-bound-sa-token\") pod \"ingress-operator-5b745b69d9-scxjx\" (UID: \"8555787c-19c9-49c3-8b1a-7261cb693b97\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027635 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f681b0b0-d68c-44b4-816e-86756d55542c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027655 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-bound-sa-token\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027676 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/38f0d965-f1ec-4d01-9155-d3740a9ce78f-etcd-client\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.027702 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8555787c-19c9-49c3-8b1a-7261cb693b97-metrics-tls\") pod \"ingress-operator-5b745b69d9-scxjx\" (UID: \"8555787c-19c9-49c3-8b1a-7261cb693b97\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx" Jan 30 16:58:52 crc kubenswrapper[4875]: E0130 16:58:52.027754 
4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:52.527741229 +0000 UTC m=+143.075104612 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:52 crc kubenswrapper[4875]: W0130 16:58:52.046934 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa7f2369_f741_4a6e_af2c_4ead754f7ea4.slice/crio-b1c2a48e37d2a05078dd1965d8fabde70ea13f3d29ba91d97abf82b8823b4ecf WatchSource:0}: Error finding container b1c2a48e37d2a05078dd1965d8fabde70ea13f3d29ba91d97abf82b8823b4ecf: Status 404 returned error can't find the container with id b1c2a48e37d2a05078dd1965d8fabde70ea13f3d29ba91d97abf82b8823b4ecf Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.096185 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mhw2"] Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.115016 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8sjn"] Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.128058 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:52 crc kubenswrapper[4875]: E0130 16:58:52.128132 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:52.628118088 +0000 UTC m=+143.175481471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.128531 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdq7v\" (UniqueName: \"kubernetes.io/projected/1b4f3833-7619-485d-9cee-761a80d9f294-kube-api-access-jdq7v\") pod \"cluster-image-registry-operator-dc59b4c8b-29blr\" (UID: \"1b4f3833-7619-485d-9cee-761a80d9f294\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr" Jan 30 16:58:52 crc kubenswrapper[4875]: W0130 16:58:52.128542 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd01c20ec_32e4_4ffe_af84_a7e75df66733.slice/crio-9fa2a46c2d8edc9116ab312ad2473bd45f1fb19e89a4bcf64c1efc2488efb6e3 WatchSource:0}: Error finding container 9fa2a46c2d8edc9116ab312ad2473bd45f1fb19e89a4bcf64c1efc2488efb6e3: Status 404 returned error can't find the container with id 9fa2a46c2d8edc9116ab312ad2473bd45f1fb19e89a4bcf64c1efc2488efb6e3 Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.128558 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45958d91-5d71-4ecc-9174-75d0d4e22f5d-config\") pod \"service-ca-operator-777779d784-r4zq5\" (UID: \"45958d91-5d71-4ecc-9174-75d0d4e22f5d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4zq5" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.128574 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfbnj\" (UniqueName: \"kubernetes.io/projected/45958d91-5d71-4ecc-9174-75d0d4e22f5d-kube-api-access-qfbnj\") pod \"service-ca-operator-777779d784-r4zq5\" (UID: \"45958d91-5d71-4ecc-9174-75d0d4e22f5d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4zq5" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.128613 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0d15a27f-97a8-4c8e-8450-5266afa2d382-default-certificate\") pod \"router-default-5444994796-5v2bh\" (UID: \"0d15a27f-97a8-4c8e-8450-5266afa2d382\") " pod="openshift-ingress/router-default-5444994796-5v2bh" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.128633 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv6hk\" (UniqueName: \"kubernetes.io/projected/8555787c-19c9-49c3-8b1a-7261cb693b97-kube-api-access-dv6hk\") pod \"ingress-operator-5b745b69d9-scxjx\" (UID: \"8555787c-19c9-49c3-8b1a-7261cb693b97\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.128647 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/16d079a0-8b15-4afe-b80b-29edde7f9251-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h4ql7\" (UID: 
\"16d079a0-8b15-4afe-b80b-29edde7f9251\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h4ql7" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.128662 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ce1959e-9d34-4221-8ede-5ec652b44b0d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2njb9\" (UID: \"0ce1959e-9d34-4221-8ede-5ec652b44b0d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2njb9" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.128681 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz2mb\" (UniqueName: \"kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-kube-api-access-qz2mb\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.128698 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.128715 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94dc77e6-c491-4bda-a95f-6ab4892d06db-secret-volume\") pod \"collect-profiles-29496525-tcxvt\" (UID: \"94dc77e6-c491-4bda-a95f-6ab4892d06db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.128825 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rclr\" (UniqueName: \"kubernetes.io/projected/16d079a0-8b15-4afe-b80b-29edde7f9251-kube-api-access-5rclr\") pod \"multus-admission-controller-857f4d67dd-h4ql7\" (UID: \"16d079a0-8b15-4afe-b80b-29edde7f9251\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h4ql7" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.128877 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhxj6\" (UniqueName: \"kubernetes.io/projected/beaaba45-df33-4540-ab78-79f1dc92f87b-kube-api-access-lhxj6\") pod \"marketplace-operator-79b997595-6hpsd\" (UID: \"beaaba45-df33-4540-ab78-79f1dc92f87b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.129088 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c315c604-594d-4069-823c-9859b87e22c7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrtj9\" (UID: \"c315c604-594d-4069-823c-9859b87e22c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrtj9" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.129132 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vll6\" (UniqueName: 
\"kubernetes.io/projected/ac862908-f2bf-42a2-b453-12f722f2cae3-kube-api-access-8vll6\") pod \"dns-operator-744455d44c-ht6ll\" (UID: \"ac862908-f2bf-42a2-b453-12f722f2cae3\") " pod="openshift-dns-operator/dns-operator-744455d44c-ht6ll" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.129160 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pps4g\" (UniqueName: \"kubernetes.io/projected/d950d064-e8ae-47c8-adb8-cb60ba5bd5b9-kube-api-access-pps4g\") pod \"kube-storage-version-migrator-operator-b67b599dd-nmcnv\" (UID: \"d950d064-e8ae-47c8-adb8-cb60ba5bd5b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmcnv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.129237 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ef68fb4-d9c6-484f-a05e-a8e5d3460a28-trusted-ca\") pod \"console-operator-58897d9998-stgmg\" (UID: \"8ef68fb4-d9c6-484f-a05e-a8e5d3460a28\") " pod="openshift-console-operator/console-operator-58897d9998-stgmg" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.129273 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5twl6\" (UniqueName: \"kubernetes.io/projected/0ce1959e-9d34-4221-8ede-5ec652b44b0d-kube-api-access-5twl6\") pod \"control-plane-machine-set-operator-78cbb6b69f-2njb9\" (UID: \"0ce1959e-9d34-4221-8ede-5ec652b44b0d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2njb9" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.129410 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c283ead9-a8b9-43ff-8188-5c583e3863f4-config\") pod \"machine-approver-56656f9798-v7xv7\" (UID: \"c283ead9-a8b9-43ff-8188-5c583e3863f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.129442 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntskk\" (UniqueName: \"kubernetes.io/projected/35f19686-9d5d-470f-8431-24ba28e8237e-kube-api-access-ntskk\") pod \"olm-operator-6b444d44fb-7d4p5\" (UID: \"35f19686-9d5d-470f-8431-24ba28e8237e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.129458 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2dfb8d2a-73aa-4723-b1aa-46346691c4c1-tmpfs\") pod \"packageserver-d55dfcdfc-69xn8\" (UID: \"2dfb8d2a-73aa-4723-b1aa-46346691c4c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.129482 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b7997c32-6e00-4402-acfb-d3bf63227f0b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dsrht\" (UID: \"b7997c32-6e00-4402-acfb-d3bf63227f0b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.129500 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-6f5xk\" (UniqueName: \"kubernetes.io/projected/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-kube-api-access-6f5xk\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.129528 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.129548 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.129572 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdjjd\" (UniqueName: \"kubernetes.io/projected/ee0a3d54-45e8-4e3b-9bed-bae82d409c21-kube-api-access-vdjjd\") pod \"catalog-operator-68c6474976-9z79s\" (UID: \"ee0a3d54-45e8-4e3b-9bed-bae82d409c21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.129608 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-config\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.129985 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c283ead9-a8b9-43ff-8188-5c583e3863f4-config\") pod \"machine-approver-56656f9798-v7xv7\" (UID: \"c283ead9-a8b9-43ff-8188-5c583e3863f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.130552 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2dfb8d2a-73aa-4723-b1aa-46346691c4c1-webhook-cert\") pod \"packageserver-d55dfcdfc-69xn8\" (UID: \"2dfb8d2a-73aa-4723-b1aa-46346691c4c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.130598 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d15a27f-97a8-4c8e-8450-5266afa2d382-metrics-certs\") pod \"router-default-5444994796-5v2bh\" (UID: \"0d15a27f-97a8-4c8e-8450-5266afa2d382\") " pod="openshift-ingress/router-default-5444994796-5v2bh" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.130636 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.130662 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac862908-f2bf-42a2-b453-12f722f2cae3-metrics-tls\") pod \"dns-operator-744455d44c-ht6ll\" (UID: \"ac862908-f2bf-42a2-b453-12f722f2cae3\") " pod="openshift-dns-operator/dns-operator-744455d44c-ht6ll" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.130684 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/35f19686-9d5d-470f-8431-24ba28e8237e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7d4p5\" (UID: \"35f19686-9d5d-470f-8431-24ba28e8237e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.130762 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38f0d965-f1ec-4d01-9155-d3740a9ce78f-serving-cert\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.130785 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvk7g\" (UniqueName: \"kubernetes.io/projected/fd902c0b-6664-425d-ad65-dd2069a17fae-kube-api-access-rvk7g\") pod \"ingress-canary-nsbwm\" (UID: \"fd902c0b-6664-425d-ad65-dd2069a17fae\") " pod="openshift-ingress-canary/ingress-canary-nsbwm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.130818 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/beaaba45-df33-4540-ab78-79f1dc92f87b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6hpsd\" (UID: \"beaaba45-df33-4540-ab78-79f1dc92f87b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.130892 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td2kp\" (UniqueName: \"kubernetes.io/projected/2dfb8d2a-73aa-4723-b1aa-46346691c4c1-kube-api-access-td2kp\") pod \"packageserver-d55dfcdfc-69xn8\" (UID: \"2dfb8d2a-73aa-4723-b1aa-46346691c4c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.130916 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c315c604-594d-4069-823c-9859b87e22c7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrtj9\" (UID: \"c315c604-594d-4069-823c-9859b87e22c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrtj9" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.130984 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d950d064-e8ae-47c8-adb8-cb60ba5bd5b9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nmcnv\" (UID: \"d950d064-e8ae-47c8-adb8-cb60ba5bd5b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmcnv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131013 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0dce9182-7f6f-48d8-a9bf-096fd7ca43ac-proxy-tls\") pod \"machine-config-operator-74547568cd-ztrcm\" (UID: \"0dce9182-7f6f-48d8-a9bf-096fd7ca43ac\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131041 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131062 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131167 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131190 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-serving-cert\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131199 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-config\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131210 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f681b0b0-d68c-44b4-816e-86756d55542c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131235 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsdjs\" (UniqueName: \"kubernetes.io/projected/37fa5454-ad47-4960-be87-5d9d4e4eab0f-kube-api-access-hsdjs\") pod 
\"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131256 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a96b097b-e9f2-4e75-a458-332b3000cae6-certs\") pod \"machine-config-server-8ft6n\" (UID: \"a96b097b-e9f2-4e75-a458-332b3000cae6\") " pod="openshift-machine-config-operator/machine-config-server-8ft6n" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131277 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f681b0b0-d68c-44b4-816e-86756d55542c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131298 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/38f0d965-f1ec-4d01-9155-d3740a9ce78f-etcd-client\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131344 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/69db4421-c7a4-42f0-9138-e132dda1bd51-signing-cabundle\") pod \"service-ca-9c57cc56f-pgmbb\" (UID: \"69db4421-c7a4-42f0-9138-e132dda1bd51\") " pod="openshift-service-ca/service-ca-9c57cc56f-pgmbb" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131365 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45958d91-5d71-4ecc-9174-75d0d4e22f5d-serving-cert\") pod \"service-ca-operator-777779d784-r4zq5\" (UID: \"45958d91-5d71-4ecc-9174-75d0d4e22f5d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4zq5" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131388 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d296\" (UniqueName: \"kubernetes.io/projected/ad3fff93-6553-4492-8bf6-03118aa9f089-kube-api-access-2d296\") pod \"dns-default-mfrmm\" (UID: \"ad3fff93-6553-4492-8bf6-03118aa9f089\") " pod="openshift-dns/dns-default-mfrmm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131410 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b7997c32-6e00-4402-acfb-d3bf63227f0b-proxy-tls\") pod \"machine-config-controller-84d6567774-dsrht\" (UID: \"b7997c32-6e00-4402-acfb-d3bf63227f0b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131506 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-service-ca\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131529 4875 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a96b097b-e9f2-4e75-a458-332b3000cae6-node-bootstrap-token\") pod \"machine-config-server-8ft6n\" (UID: \"a96b097b-e9f2-4e75-a458-332b3000cae6\") " pod="openshift-machine-config-operator/machine-config-server-8ft6n" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131549 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0dce9182-7f6f-48d8-a9bf-096fd7ca43ac-images\") pod \"machine-config-operator-74547568cd-ztrcm\" (UID: \"0dce9182-7f6f-48d8-a9bf-096fd7ca43ac\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131593 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/38f0d965-f1ec-4d01-9155-d3740a9ce78f-etcd-serving-ca\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131618 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee0a3d54-45e8-4e3b-9bed-bae82d409c21-srv-cert\") pod \"catalog-operator-68c6474976-9z79s\" (UID: \"ee0a3d54-45e8-4e3b-9bed-bae82d409c21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131644 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b4f3833-7619-485d-9cee-761a80d9f294-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-29blr\" (UID: \"1b4f3833-7619-485d-9cee-761a80d9f294\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131665 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94dc77e6-c491-4bda-a95f-6ab4892d06db-config-volume\") pod \"collect-profiles-29496525-tcxvt\" (UID: \"94dc77e6-c491-4bda-a95f-6ab4892d06db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131708 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f0d965-f1ec-4d01-9155-d3740a9ce78f-config\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131730 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ef68fb4-d9c6-484f-a05e-a8e5d3460a28-serving-cert\") pod \"console-operator-58897d9998-stgmg\" (UID: \"8ef68fb4-d9c6-484f-a05e-a8e5d3460a28\") " pod="openshift-console-operator/console-operator-58897d9998-stgmg" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131766 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b4f3833-7619-485d-9cee-761a80d9f294-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-29blr\" (UID: \"1b4f3833-7619-485d-9cee-761a80d9f294\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131794 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9gxs\" (UniqueName: \"kubernetes.io/projected/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-kube-api-access-q9gxs\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131814 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ks6j\" (UniqueName: \"kubernetes.io/projected/0d15a27f-97a8-4c8e-8450-5266afa2d382-kube-api-access-2ks6j\") pod \"router-default-5444994796-5v2bh\" (UID: \"0d15a27f-97a8-4c8e-8450-5266afa2d382\") " pod="openshift-ingress/router-default-5444994796-5v2bh" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131837 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131862 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edaae5aa-0654-4349-9473-907e90886e59-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4gqn8\" (UID: \"edaae5aa-0654-4349-9473-907e90886e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4gqn8" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.131883 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be8257d7-3aa4-406a-9f47-bda46f688e32-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mjwmm\" (UID: \"be8257d7-3aa4-406a-9f47-bda46f688e32\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mjwmm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132036 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-plugins-dir\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132070 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-csi-data-dir\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132102 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: 
\"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132128 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132149 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/38f0d965-f1ec-4d01-9155-d3740a9ce78f-audit\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132199 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35f19686-9d5d-470f-8431-24ba28e8237e-srv-cert\") pod \"olm-operator-6b444d44fb-7d4p5\" (UID: \"35f19686-9d5d-470f-8431-24ba28e8237e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132236 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kh6t\" (UniqueName: \"kubernetes.io/projected/8ef68fb4-d9c6-484f-a05e-a8e5d3460a28-kube-api-access-9kh6t\") pod \"console-operator-58897d9998-stgmg\" (UID: \"8ef68fb4-d9c6-484f-a05e-a8e5d3460a28\") " pod="openshift-console-operator/console-operator-58897d9998-stgmg" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132267 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/80518ae7-5ae1-40f4-8551-c97d8dfe4433-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s24dp\" (UID: \"80518ae7-5ae1-40f4-8551-c97d8dfe4433\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s24dp" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132302 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl87g\" (UniqueName: \"kubernetes.io/projected/38f0d965-f1ec-4d01-9155-d3740a9ce78f-kube-api-access-zl87g\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132330 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/38f0d965-f1ec-4d01-9155-d3740a9ce78f-image-import-ca\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132351 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-etcd-ca\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp" Jan 30 16:58:52 crc 
kubenswrapper[4875]: I0130 16:58:52.132375 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-socket-dir\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132396 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-etcd-client\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132420 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38f0d965-f1ec-4d01-9155-d3740a9ce78f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132441 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad3fff93-6553-4492-8bf6-03118aa9f089-metrics-tls\") pod \"dns-default-mfrmm\" (UID: \"ad3fff93-6553-4492-8bf6-03118aa9f089\") " pod="openshift-dns/dns-default-mfrmm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132459 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0d15a27f-97a8-4c8e-8450-5266afa2d382-stats-auth\") pod \"router-default-5444994796-5v2bh\" (UID: \"0d15a27f-97a8-4c8e-8450-5266afa2d382\") " pod="openshift-ingress/router-default-5444994796-5v2bh" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132495 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b4f3833-7619-485d-9cee-761a80d9f294-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-29blr\" (UID: \"1b4f3833-7619-485d-9cee-761a80d9f294\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132518 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt2w5\" (UniqueName: \"kubernetes.io/projected/94dc77e6-c491-4bda-a95f-6ab4892d06db-kube-api-access-zt2w5\") pod \"collect-profiles-29496525-tcxvt\" (UID: \"94dc77e6-c491-4bda-a95f-6ab4892d06db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132537 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-registration-dir\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132569 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/fd902c0b-6664-425d-ad65-dd2069a17fae-cert\") pod \"ingress-canary-nsbwm\" (UID: \"fd902c0b-6664-425d-ad65-dd2069a17fae\") " pod="openshift-ingress-canary/ingress-canary-nsbwm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132611 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cpzq\" (UniqueName: \"kubernetes.io/projected/53ad913d-a076-4972-93ae-1271d4c2ab76-kube-api-access-8cpzq\") pod \"migrator-59844c95c7-pwgd8\" (UID: \"53ad913d-a076-4972-93ae-1271d4c2ab76\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pwgd8" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.132971 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133006 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0dce9182-7f6f-48d8-a9bf-096fd7ca43ac-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ztrcm\" (UID: \"0dce9182-7f6f-48d8-a9bf-096fd7ca43ac\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133063 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-trusted-ca-bundle\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133086 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-oauth-serving-cert\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133109 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edaae5aa-0654-4349-9473-907e90886e59-config\") pod \"kube-apiserver-operator-766d6c64bb-4gqn8\" (UID: \"edaae5aa-0654-4349-9473-907e90886e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4gqn8" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133147 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133170 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/38f0d965-f1ec-4d01-9155-d3740a9ce78f-node-pullsecrets\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133192 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n4qh\" (UniqueName: \"kubernetes.io/projected/a96b097b-e9f2-4e75-a458-332b3000cae6-kube-api-access-5n4qh\") pod \"machine-config-server-8ft6n\" (UID: \"a96b097b-e9f2-4e75-a458-332b3000cae6\") " pod="openshift-machine-config-operator/machine-config-server-8ft6n" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133214 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-serving-cert\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133248 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-audit-policies\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133269 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/38f0d965-f1ec-4d01-9155-d3740a9ce78f-audit-dir\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133289 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-mountpoint-dir\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133319 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f681b0b0-d68c-44b4-816e-86756d55542c-trusted-ca\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133341 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/edaae5aa-0654-4349-9473-907e90886e59-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4gqn8\" (UID: \"edaae5aa-0654-4349-9473-907e90886e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4gqn8" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133362 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjlp7\" (UniqueName: \"kubernetes.io/projected/69db4421-c7a4-42f0-9138-e132dda1bd51-kube-api-access-hjlp7\") pod \"service-ca-9c57cc56f-pgmbb\" (UID: \"69db4421-c7a4-42f0-9138-e132dda1bd51\") " pod="openshift-service-ca/service-ca-9c57cc56f-pgmbb" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133393 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c283ead9-a8b9-43ff-8188-5c583e3863f4-machine-approver-tls\") pod \"machine-approver-56656f9798-v7xv7\" (UID: \"c283ead9-a8b9-43ff-8188-5c583e3863f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133429 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/38f0d965-f1ec-4d01-9155-d3740a9ce78f-encryption-config\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133458 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-etcd-service-ca\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133518 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8555787c-19c9-49c3-8b1a-7261cb693b97-bound-sa-token\") pod \"ingress-operator-5b745b69d9-scxjx\" (UID: \"8555787c-19c9-49c3-8b1a-7261cb693b97\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.133559 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2dfb8d2a-73aa-4723-b1aa-46346691c4c1-apiservice-cert\") pod \"packageserver-d55dfcdfc-69xn8\" (UID: \"2dfb8d2a-73aa-4723-b1aa-46346691c4c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.134411 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.134962 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f681b0b0-d68c-44b4-816e-86756d55542c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.136198 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-bound-sa-token\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.136241 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d950d064-e8ae-47c8-adb8-cb60ba5bd5b9-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-nmcnv\" (UID: \"d950d064-e8ae-47c8-adb8-cb60ba5bd5b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmcnv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.136265 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d15a27f-97a8-4c8e-8450-5266afa2d382-service-ca-bundle\") pod \"router-default-5444994796-5v2bh\" (UID: \"0d15a27f-97a8-4c8e-8450-5266afa2d382\") " pod="openshift-ingress/router-default-5444994796-5v2bh" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.136330 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8555787c-19c9-49c3-8b1a-7261cb693b97-metrics-tls\") pod \"ingress-operator-5b745b69d9-scxjx\" (UID: \"8555787c-19c9-49c3-8b1a-7261cb693b97\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.136374 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ef68fb4-d9c6-484f-a05e-a8e5d3460a28-config\") pod \"console-operator-58897d9998-stgmg\" (UID: \"8ef68fb4-d9c6-484f-a05e-a8e5d3460a28\") " pod="openshift-console-operator/console-operator-58897d9998-stgmg" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.136394 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/69db4421-c7a4-42f0-9138-e132dda1bd51-signing-key\") pod \"service-ca-9c57cc56f-pgmbb\" (UID: \"69db4421-c7a4-42f0-9138-e132dda1bd51\") " pod="openshift-service-ca/service-ca-9c57cc56f-pgmbb" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.136670 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.136768 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-serving-cert\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.137449 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/38f0d965-f1ec-4d01-9155-d3740a9ce78f-image-import-ca\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.137968 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38f0d965-f1ec-4d01-9155-d3740a9ce78f-config\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.138091 4875 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-service-ca\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.138725 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/38f0d965-f1ec-4d01-9155-d3740a9ce78f-etcd-serving-ca\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.139188 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/38f0d965-f1ec-4d01-9155-d3740a9ce78f-audit\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: E0130 16:58:52.139245 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:52.639217872 +0000 UTC m=+143.186581255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.139365 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.139476 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38f0d965-f1ec-4d01-9155-d3740a9ce78f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.150606 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/38f0d965-f1ec-4d01-9155-d3740a9ce78f-etcd-client\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.150913 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b4f3833-7619-485d-9cee-761a80d9f294-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-29blr\" (UID: \"1b4f3833-7619-485d-9cee-761a80d9f294\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 
16:58:52.151199 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.151642 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f681b0b0-d68c-44b4-816e-86756d55542c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.151656 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.152008 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.152126 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38f0d965-f1ec-4d01-9155-d3740a9ce78f-serving-cert\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.153044 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f681b0b0-d68c-44b4-816e-86756d55542c-trusted-ca\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.153272 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.155287 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edaae5aa-0654-4349-9473-907e90886e59-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4gqn8\" (UID: \"edaae5aa-0654-4349-9473-907e90886e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4gqn8" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.157762 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/38f0d965-f1ec-4d01-9155-d3740a9ce78f-node-pullsecrets\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.157913 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c283ead9-a8b9-43ff-8188-5c583e3863f4-machine-approver-tls\") pod \"machine-approver-56656f9798-v7xv7\" (UID: \"c283ead9-a8b9-43ff-8188-5c583e3863f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.158882 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edaae5aa-0654-4349-9473-907e90886e59-config\") pod \"kube-apiserver-operator-766d6c64bb-4gqn8\" (UID: \"edaae5aa-0654-4349-9473-907e90886e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4gqn8" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.159095 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffpcz\" (UniqueName: \"kubernetes.io/projected/c315c604-594d-4069-823c-9859b87e22c7-kube-api-access-ffpcz\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrtj9\" (UID: \"c315c604-594d-4069-823c-9859b87e22c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrtj9" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.159125 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be8257d7-3aa4-406a-9f47-bda46f688e32-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mjwmm\" (UID: \"be8257d7-3aa4-406a-9f47-bda46f688e32\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mjwmm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.159149 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-config\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.159144 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-audit-policies\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.159180 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndkxr\" (UniqueName: \"kubernetes.io/projected/a764d0e3-2762-4d13-b92e-30e68c104bf6-kube-api-access-ndkxr\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.159262 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-oauth-serving-cert\") pod \"console-f9d7485db-7s4zv\" 
(UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.159287 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c283ead9-a8b9-43ff-8188-5c583e3863f4-auth-proxy-config\") pod \"machine-approver-56656f9798-v7xv7\" (UID: \"c283ead9-a8b9-43ff-8188-5c583e3863f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.159318 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8555787c-19c9-49c3-8b1a-7261cb693b97-trusted-ca\") pod \"ingress-operator-5b745b69d9-scxjx\" (UID: \"8555787c-19c9-49c3-8b1a-7261cb693b97\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.159387 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt6hd\" (UniqueName: \"kubernetes.io/projected/69e24be4-7935-43ce-9815-ed1fa40e9933-kube-api-access-wt6hd\") pod \"downloads-7954f5f757-qc97s\" (UID: \"69e24be4-7935-43ce-9815-ed1fa40e9933\") " pod="openshift-console/downloads-7954f5f757-qc97s" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.161843 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c283ead9-a8b9-43ff-8188-5c583e3863f4-auth-proxy-config\") pod \"machine-approver-56656f9798-v7xv7\" (UID: \"c283ead9-a8b9-43ff-8188-5c583e3863f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.161894 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8555787c-19c9-49c3-8b1a-7261cb693b97-trusted-ca\") pod \"ingress-operator-5b745b69d9-scxjx\" (UID: \"8555787c-19c9-49c3-8b1a-7261cb693b97\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.161986 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-registry-tls\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.162066 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8555787c-19c9-49c3-8b1a-7261cb693b97-metrics-tls\") pod \"ingress-operator-5b745b69d9-scxjx\" (UID: \"8555787c-19c9-49c3-8b1a-7261cb693b97\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.162137 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f681b0b0-d68c-44b4-816e-86756d55542c-registry-certificates\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.162144 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.162241 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a764d0e3-2762-4d13-b92e-30e68c104bf6-audit-dir\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.162425 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a764d0e3-2762-4d13-b92e-30e68c104bf6-audit-dir\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.162528 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-oauth-config\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.163355 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zrzj\" (UniqueName: \"kubernetes.io/projected/b7997c32-6e00-4402-acfb-d3bf63227f0b-kube-api-access-7zrzj\") pod \"machine-config-controller-84d6567774-dsrht\" (UID: \"b7997c32-6e00-4402-acfb-d3bf63227f0b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.163407 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.163457 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpszd\" (UniqueName: \"kubernetes.io/projected/0dce9182-7f6f-48d8-a9bf-096fd7ca43ac-kube-api-access-dpszd\") pod \"machine-config-operator-74547568cd-ztrcm\" (UID: \"0dce9182-7f6f-48d8-a9bf-096fd7ca43ac\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.163498 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad3fff93-6553-4492-8bf6-03118aa9f089-config-volume\") pod \"dns-default-mfrmm\" (UID: \"ad3fff93-6553-4492-8bf6-03118aa9f089\") " pod="openshift-dns/dns-default-mfrmm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.163538 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crkrg\" (UniqueName: \"kubernetes.io/projected/c283ead9-a8b9-43ff-8188-5c583e3863f4-kube-api-access-crkrg\") 
pod \"machine-approver-56656f9798-v7xv7\" (UID: \"c283ead9-a8b9-43ff-8188-5c583e3863f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.163569 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/beaaba45-df33-4540-ab78-79f1dc92f87b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6hpsd\" (UID: \"beaaba45-df33-4540-ab78-79f1dc92f87b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.163959 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee0a3d54-45e8-4e3b-9bed-bae82d409c21-profile-collector-cert\") pod \"catalog-operator-68c6474976-9z79s\" (UID: \"ee0a3d54-45e8-4e3b-9bed-bae82d409c21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.164034 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjkqc\" (UniqueName: \"kubernetes.io/projected/80518ae7-5ae1-40f4-8551-c97d8dfe4433-kube-api-access-fjkqc\") pod \"package-server-manager-789f6589d5-s24dp\" (UID: \"80518ae7-5ae1-40f4-8551-c97d8dfe4433\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s24dp" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.164067 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8257d7-3aa4-406a-9f47-bda46f688e32-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mjwmm\" (UID: \"be8257d7-3aa4-406a-9f47-bda46f688e32\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mjwmm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.166102 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.166798 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-oauth-config\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.166912 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.167293 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f681b0b0-d68c-44b4-816e-86756d55542c-registry-certificates\") 
pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.168336 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-registry-tls\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.168477 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ac862908-f2bf-42a2-b453-12f722f2cae3-metrics-tls\") pod \"dns-operator-744455d44c-ht6ll\" (UID: \"ac862908-f2bf-42a2-b453-12f722f2cae3\") " pod="openshift-dns-operator/dns-operator-744455d44c-ht6ll" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.169073 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b4f3833-7619-485d-9cee-761a80d9f294-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-29blr\" (UID: \"1b4f3833-7619-485d-9cee-761a80d9f294\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.169307 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdq7v\" (UniqueName: \"kubernetes.io/projected/1b4f3833-7619-485d-9cee-761a80d9f294-kube-api-access-jdq7v\") pod \"cluster-image-registry-operator-dc59b4c8b-29blr\" (UID: \"1b4f3833-7619-485d-9cee-761a80d9f294\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.174167 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-trusted-ca-bundle\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.186379 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv6hk\" (UniqueName: \"kubernetes.io/projected/8555787c-19c9-49c3-8b1a-7261cb693b97-kube-api-access-dv6hk\") pod \"ingress-operator-5b745b69d9-scxjx\" (UID: \"8555787c-19c9-49c3-8b1a-7261cb693b97\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.192395 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz2mb\" (UniqueName: \"kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-kube-api-access-qz2mb\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.213757 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vll6\" (UniqueName: \"kubernetes.io/projected/ac862908-f2bf-42a2-b453-12f722f2cae3-kube-api-access-8vll6\") pod \"dns-operator-744455d44c-ht6ll\" (UID: \"ac862908-f2bf-42a2-b453-12f722f2cae3\") " pod="openshift-dns-operator/dns-operator-744455d44c-ht6ll" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.266423 4875 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:52 crc kubenswrapper[4875]: E0130 16:58:52.266569 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:52.766544128 +0000 UTC m=+143.313907511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.266642 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b7997c32-6e00-4402-acfb-d3bf63227f0b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dsrht\" (UID: \"b7997c32-6e00-4402-acfb-d3bf63227f0b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.266674 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f5xk\" (UniqueName: \"kubernetes.io/projected/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-kube-api-access-6f5xk\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.266698 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdjjd\" (UniqueName: \"kubernetes.io/projected/ee0a3d54-45e8-4e3b-9bed-bae82d409c21-kube-api-access-vdjjd\") pod \"catalog-operator-68c6474976-9z79s\" (UID: \"ee0a3d54-45e8-4e3b-9bed-bae82d409c21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.266727 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2dfb8d2a-73aa-4723-b1aa-46346691c4c1-webhook-cert\") pod \"packageserver-d55dfcdfc-69xn8\" (UID: \"2dfb8d2a-73aa-4723-b1aa-46346691c4c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.266748 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d15a27f-97a8-4c8e-8450-5266afa2d382-metrics-certs\") pod \"router-default-5444994796-5v2bh\" (UID: \"0d15a27f-97a8-4c8e-8450-5266afa2d382\") " pod="openshift-ingress/router-default-5444994796-5v2bh" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.266769 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/35f19686-9d5d-470f-8431-24ba28e8237e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7d4p5\" (UID: \"35f19686-9d5d-470f-8431-24ba28e8237e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.266801 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/beaaba45-df33-4540-ab78-79f1dc92f87b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6hpsd\" (UID: \"beaaba45-df33-4540-ab78-79f1dc92f87b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.266821 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvk7g\" (UniqueName: \"kubernetes.io/projected/fd902c0b-6664-425d-ad65-dd2069a17fae-kube-api-access-rvk7g\") pod \"ingress-canary-nsbwm\" (UID: \"fd902c0b-6664-425d-ad65-dd2069a17fae\") " pod="openshift-ingress-canary/ingress-canary-nsbwm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.266851 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c315c604-594d-4069-823c-9859b87e22c7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrtj9\" (UID: \"c315c604-594d-4069-823c-9859b87e22c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrtj9" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.266871 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td2kp\" (UniqueName: \"kubernetes.io/projected/2dfb8d2a-73aa-4723-b1aa-46346691c4c1-kube-api-access-td2kp\") pod \"packageserver-d55dfcdfc-69xn8\" (UID: \"2dfb8d2a-73aa-4723-b1aa-46346691c4c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.266892 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d950d064-e8ae-47c8-adb8-cb60ba5bd5b9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nmcnv\" (UID: \"d950d064-e8ae-47c8-adb8-cb60ba5bd5b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmcnv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.266913 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0dce9182-7f6f-48d8-a9bf-096fd7ca43ac-proxy-tls\") pod \"machine-config-operator-74547568cd-ztrcm\" (UID: \"0dce9182-7f6f-48d8-a9bf-096fd7ca43ac\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.266942 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a96b097b-e9f2-4e75-a458-332b3000cae6-certs\") pod \"machine-config-server-8ft6n\" (UID: \"a96b097b-e9f2-4e75-a458-332b3000cae6\") " pod="openshift-machine-config-operator/machine-config-server-8ft6n" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.266963 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/69db4421-c7a4-42f0-9138-e132dda1bd51-signing-cabundle\") pod 
\"service-ca-9c57cc56f-pgmbb\" (UID: \"69db4421-c7a4-42f0-9138-e132dda1bd51\") " pod="openshift-service-ca/service-ca-9c57cc56f-pgmbb" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.266984 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45958d91-5d71-4ecc-9174-75d0d4e22f5d-serving-cert\") pod \"service-ca-operator-777779d784-r4zq5\" (UID: \"45958d91-5d71-4ecc-9174-75d0d4e22f5d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4zq5" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267007 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d296\" (UniqueName: \"kubernetes.io/projected/ad3fff93-6553-4492-8bf6-03118aa9f089-kube-api-access-2d296\") pod \"dns-default-mfrmm\" (UID: \"ad3fff93-6553-4492-8bf6-03118aa9f089\") " pod="openshift-dns/dns-default-mfrmm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267025 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b7997c32-6e00-4402-acfb-d3bf63227f0b-proxy-tls\") pod \"machine-config-controller-84d6567774-dsrht\" (UID: \"b7997c32-6e00-4402-acfb-d3bf63227f0b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267048 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a96b097b-e9f2-4e75-a458-332b3000cae6-node-bootstrap-token\") pod \"machine-config-server-8ft6n\" (UID: \"a96b097b-e9f2-4e75-a458-332b3000cae6\") " pod="openshift-machine-config-operator/machine-config-server-8ft6n" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267072 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0dce9182-7f6f-48d8-a9bf-096fd7ca43ac-images\") pod \"machine-config-operator-74547568cd-ztrcm\" (UID: \"0dce9182-7f6f-48d8-a9bf-096fd7ca43ac\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267112 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94dc77e6-c491-4bda-a95f-6ab4892d06db-config-volume\") pod \"collect-profiles-29496525-tcxvt\" (UID: \"94dc77e6-c491-4bda-a95f-6ab4892d06db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267136 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee0a3d54-45e8-4e3b-9bed-bae82d409c21-srv-cert\") pod \"catalog-operator-68c6474976-9z79s\" (UID: \"ee0a3d54-45e8-4e3b-9bed-bae82d409c21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267161 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ef68fb4-d9c6-484f-a05e-a8e5d3460a28-serving-cert\") pod \"console-operator-58897d9998-stgmg\" (UID: \"8ef68fb4-d9c6-484f-a05e-a8e5d3460a28\") " pod="openshift-console-operator/console-operator-58897d9998-stgmg" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267182 4875 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q9gxs\" (UniqueName: \"kubernetes.io/projected/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-kube-api-access-q9gxs\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267201 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ks6j\" (UniqueName: \"kubernetes.io/projected/0d15a27f-97a8-4c8e-8450-5266afa2d382-kube-api-access-2ks6j\") pod \"router-default-5444994796-5v2bh\" (UID: \"0d15a27f-97a8-4c8e-8450-5266afa2d382\") " pod="openshift-ingress/router-default-5444994796-5v2bh" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267228 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be8257d7-3aa4-406a-9f47-bda46f688e32-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mjwmm\" (UID: \"be8257d7-3aa4-406a-9f47-bda46f688e32\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mjwmm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267248 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-plugins-dir\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267266 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-csi-data-dir\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267292 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267315 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35f19686-9d5d-470f-8431-24ba28e8237e-srv-cert\") pod \"olm-operator-6b444d44fb-7d4p5\" (UID: \"35f19686-9d5d-470f-8431-24ba28e8237e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267355 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kh6t\" (UniqueName: \"kubernetes.io/projected/8ef68fb4-d9c6-484f-a05e-a8e5d3460a28-kube-api-access-9kh6t\") pod \"console-operator-58897d9998-stgmg\" (UID: \"8ef68fb4-d9c6-484f-a05e-a8e5d3460a28\") " pod="openshift-console-operator/console-operator-58897d9998-stgmg" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267384 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/80518ae7-5ae1-40f4-8551-c97d8dfe4433-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-s24dp\" (UID: \"80518ae7-5ae1-40f4-8551-c97d8dfe4433\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s24dp" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267409 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl87g\" (UniqueName: \"kubernetes.io/projected/38f0d965-f1ec-4d01-9155-d3740a9ce78f-kube-api-access-zl87g\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267429 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-etcd-ca\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267452 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-socket-dir\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267476 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-etcd-client\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267505 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad3fff93-6553-4492-8bf6-03118aa9f089-metrics-tls\") pod \"dns-default-mfrmm\" (UID: \"ad3fff93-6553-4492-8bf6-03118aa9f089\") " pod="openshift-dns/dns-default-mfrmm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267530 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0d15a27f-97a8-4c8e-8450-5266afa2d382-stats-auth\") pod \"router-default-5444994796-5v2bh\" (UID: \"0d15a27f-97a8-4c8e-8450-5266afa2d382\") " pod="openshift-ingress/router-default-5444994796-5v2bh" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267554 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt2w5\" (UniqueName: \"kubernetes.io/projected/94dc77e6-c491-4bda-a95f-6ab4892d06db-kube-api-access-zt2w5\") pod \"collect-profiles-29496525-tcxvt\" (UID: \"94dc77e6-c491-4bda-a95f-6ab4892d06db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267574 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-registration-dir\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267620 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/fd902c0b-6664-425d-ad65-dd2069a17fae-cert\") pod \"ingress-canary-nsbwm\" (UID: \"fd902c0b-6664-425d-ad65-dd2069a17fae\") " pod="openshift-ingress-canary/ingress-canary-nsbwm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267642 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cpzq\" (UniqueName: \"kubernetes.io/projected/53ad913d-a076-4972-93ae-1271d4c2ab76-kube-api-access-8cpzq\") pod \"migrator-59844c95c7-pwgd8\" (UID: \"53ad913d-a076-4972-93ae-1271d4c2ab76\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pwgd8" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267664 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0dce9182-7f6f-48d8-a9bf-096fd7ca43ac-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ztrcm\" (UID: \"0dce9182-7f6f-48d8-a9bf-096fd7ca43ac\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267688 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n4qh\" (UniqueName: \"kubernetes.io/projected/a96b097b-e9f2-4e75-a458-332b3000cae6-kube-api-access-5n4qh\") pod \"machine-config-server-8ft6n\" (UID: \"a96b097b-e9f2-4e75-a458-332b3000cae6\") " pod="openshift-machine-config-operator/machine-config-server-8ft6n" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267707 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-serving-cert\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267728 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/38f0d965-f1ec-4d01-9155-d3740a9ce78f-audit-dir\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267749 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-mountpoint-dir\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267783 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjlp7\" (UniqueName: \"kubernetes.io/projected/69db4421-c7a4-42f0-9138-e132dda1bd51-kube-api-access-hjlp7\") pod \"service-ca-9c57cc56f-pgmbb\" (UID: \"69db4421-c7a4-42f0-9138-e132dda1bd51\") " pod="openshift-service-ca/service-ca-9c57cc56f-pgmbb" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267809 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/38f0d965-f1ec-4d01-9155-d3740a9ce78f-encryption-config\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267839 4875 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-etcd-service-ca\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267859 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2dfb8d2a-73aa-4723-b1aa-46346691c4c1-apiservice-cert\") pod \"packageserver-d55dfcdfc-69xn8\" (UID: \"2dfb8d2a-73aa-4723-b1aa-46346691c4c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267885 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d950d064-e8ae-47c8-adb8-cb60ba5bd5b9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nmcnv\" (UID: \"d950d064-e8ae-47c8-adb8-cb60ba5bd5b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmcnv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267915 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d15a27f-97a8-4c8e-8450-5266afa2d382-service-ca-bundle\") pod \"router-default-5444994796-5v2bh\" (UID: \"0d15a27f-97a8-4c8e-8450-5266afa2d382\") " pod="openshift-ingress/router-default-5444994796-5v2bh" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267937 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/69db4421-c7a4-42f0-9138-e132dda1bd51-signing-key\") pod \"service-ca-9c57cc56f-pgmbb\" (UID: \"69db4421-c7a4-42f0-9138-e132dda1bd51\") " pod="openshift-service-ca/service-ca-9c57cc56f-pgmbb" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267959 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ef68fb4-d9c6-484f-a05e-a8e5d3460a28-config\") pod \"console-operator-58897d9998-stgmg\" (UID: \"8ef68fb4-d9c6-484f-a05e-a8e5d3460a28\") " pod="openshift-console-operator/console-operator-58897d9998-stgmg" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267978 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-config\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.267999 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffpcz\" (UniqueName: \"kubernetes.io/projected/c315c604-594d-4069-823c-9859b87e22c7-kube-api-access-ffpcz\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrtj9\" (UID: \"c315c604-594d-4069-823c-9859b87e22c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrtj9" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268022 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be8257d7-3aa4-406a-9f47-bda46f688e32-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-mjwmm\" (UID: \"be8257d7-3aa4-406a-9f47-bda46f688e32\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mjwmm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268059 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt6hd\" (UniqueName: \"kubernetes.io/projected/69e24be4-7935-43ce-9815-ed1fa40e9933-kube-api-access-wt6hd\") pod \"downloads-7954f5f757-qc97s\" (UID: \"69e24be4-7935-43ce-9815-ed1fa40e9933\") " pod="openshift-console/downloads-7954f5f757-qc97s" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268085 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zrzj\" (UniqueName: \"kubernetes.io/projected/b7997c32-6e00-4402-acfb-d3bf63227f0b-kube-api-access-7zrzj\") pod \"machine-config-controller-84d6567774-dsrht\" (UID: \"b7997c32-6e00-4402-acfb-d3bf63227f0b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268107 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpszd\" (UniqueName: \"kubernetes.io/projected/0dce9182-7f6f-48d8-a9bf-096fd7ca43ac-kube-api-access-dpszd\") pod \"machine-config-operator-74547568cd-ztrcm\" (UID: \"0dce9182-7f6f-48d8-a9bf-096fd7ca43ac\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268126 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad3fff93-6553-4492-8bf6-03118aa9f089-config-volume\") pod \"dns-default-mfrmm\" (UID: \"ad3fff93-6553-4492-8bf6-03118aa9f089\") " pod="openshift-dns/dns-default-mfrmm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268146 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjkqc\" (UniqueName: \"kubernetes.io/projected/80518ae7-5ae1-40f4-8551-c97d8dfe4433-kube-api-access-fjkqc\") pod \"package-server-manager-789f6589d5-s24dp\" (UID: \"80518ae7-5ae1-40f4-8551-c97d8dfe4433\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s24dp" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268184 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/beaaba45-df33-4540-ab78-79f1dc92f87b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6hpsd\" (UID: \"beaaba45-df33-4540-ab78-79f1dc92f87b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268209 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee0a3d54-45e8-4e3b-9bed-bae82d409c21-profile-collector-cert\") pod \"catalog-operator-68c6474976-9z79s\" (UID: \"ee0a3d54-45e8-4e3b-9bed-bae82d409c21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268230 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8257d7-3aa4-406a-9f47-bda46f688e32-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mjwmm\" (UID: \"be8257d7-3aa4-406a-9f47-bda46f688e32\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mjwmm" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268252 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0d15a27f-97a8-4c8e-8450-5266afa2d382-default-certificate\") pod \"router-default-5444994796-5v2bh\" (UID: \"0d15a27f-97a8-4c8e-8450-5266afa2d382\") " pod="openshift-ingress/router-default-5444994796-5v2bh" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268274 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45958d91-5d71-4ecc-9174-75d0d4e22f5d-config\") pod \"service-ca-operator-777779d784-r4zq5\" (UID: \"45958d91-5d71-4ecc-9174-75d0d4e22f5d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4zq5" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268295 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfbnj\" (UniqueName: \"kubernetes.io/projected/45958d91-5d71-4ecc-9174-75d0d4e22f5d-kube-api-access-qfbnj\") pod \"service-ca-operator-777779d784-r4zq5\" (UID: \"45958d91-5d71-4ecc-9174-75d0d4e22f5d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4zq5" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268318 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/16d079a0-8b15-4afe-b80b-29edde7f9251-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h4ql7\" (UID: \"16d079a0-8b15-4afe-b80b-29edde7f9251\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h4ql7" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268335 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94dc77e6-c491-4bda-a95f-6ab4892d06db-config-volume\") pod \"collect-profiles-29496525-tcxvt\" (UID: \"94dc77e6-c491-4bda-a95f-6ab4892d06db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268346 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ce1959e-9d34-4221-8ede-5ec652b44b0d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2njb9\" (UID: \"0ce1959e-9d34-4221-8ede-5ec652b44b0d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2njb9" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268381 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rclr\" (UniqueName: \"kubernetes.io/projected/16d079a0-8b15-4afe-b80b-29edde7f9251-kube-api-access-5rclr\") pod \"multus-admission-controller-857f4d67dd-h4ql7\" (UID: \"16d079a0-8b15-4afe-b80b-29edde7f9251\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h4ql7" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268404 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94dc77e6-c491-4bda-a95f-6ab4892d06db-secret-volume\") pod \"collect-profiles-29496525-tcxvt\" (UID: \"94dc77e6-c491-4bda-a95f-6ab4892d06db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt" Jan 30 16:58:52 crc 
kubenswrapper[4875]: I0130 16:58:52.268425 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhxj6\" (UniqueName: \"kubernetes.io/projected/beaaba45-df33-4540-ab78-79f1dc92f87b-kube-api-access-lhxj6\") pod \"marketplace-operator-79b997595-6hpsd\" (UID: \"beaaba45-df33-4540-ab78-79f1dc92f87b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268445 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c315c604-594d-4069-823c-9859b87e22c7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrtj9\" (UID: \"c315c604-594d-4069-823c-9859b87e22c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrtj9" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268469 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pps4g\" (UniqueName: \"kubernetes.io/projected/d950d064-e8ae-47c8-adb8-cb60ba5bd5b9-kube-api-access-pps4g\") pod \"kube-storage-version-migrator-operator-b67b599dd-nmcnv\" (UID: \"d950d064-e8ae-47c8-adb8-cb60ba5bd5b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmcnv" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268491 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntskk\" (UniqueName: \"kubernetes.io/projected/35f19686-9d5d-470f-8431-24ba28e8237e-kube-api-access-ntskk\") pod \"olm-operator-6b444d44fb-7d4p5\" (UID: \"35f19686-9d5d-470f-8431-24ba28e8237e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268510 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2dfb8d2a-73aa-4723-b1aa-46346691c4c1-tmpfs\") pod \"packageserver-d55dfcdfc-69xn8\" (UID: \"2dfb8d2a-73aa-4723-b1aa-46346691c4c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268531 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ef68fb4-d9c6-484f-a05e-a8e5d3460a28-trusted-ca\") pod \"console-operator-58897d9998-stgmg\" (UID: \"8ef68fb4-d9c6-484f-a05e-a8e5d3460a28\") " pod="openshift-console-operator/console-operator-58897d9998-stgmg" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268553 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5twl6\" (UniqueName: \"kubernetes.io/projected/0ce1959e-9d34-4221-8ede-5ec652b44b0d-kube-api-access-5twl6\") pod \"control-plane-machine-set-operator-78cbb6b69f-2njb9\" (UID: \"0ce1959e-9d34-4221-8ede-5ec652b44b0d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2njb9" Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.268928 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b7997c32-6e00-4402-acfb-d3bf63227f0b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dsrht\" (UID: \"b7997c32-6e00-4402-acfb-d3bf63227f0b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht" Jan 30 16:58:52 crc 
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.271714 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c315c604-594d-4069-823c-9859b87e22c7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrtj9\" (UID: \"c315c604-594d-4069-823c-9859b87e22c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrtj9"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.272775 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/69db4421-c7a4-42f0-9138-e132dda1bd51-signing-cabundle\") pod \"service-ca-9c57cc56f-pgmbb\" (UID: \"69db4421-c7a4-42f0-9138-e132dda1bd51\") " pod="openshift-service-ca/service-ca-9c57cc56f-pgmbb"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.272845 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0dce9182-7f6f-48d8-a9bf-096fd7ca43ac-images\") pod \"machine-config-operator-74547568cd-ztrcm\" (UID: \"0dce9182-7f6f-48d8-a9bf-096fd7ca43ac\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.273830 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0dce9182-7f6f-48d8-a9bf-096fd7ca43ac-proxy-tls\") pod \"machine-config-operator-74547568cd-ztrcm\" (UID: \"0dce9182-7f6f-48d8-a9bf-096fd7ca43ac\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.273839 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2dfb8d2a-73aa-4723-b1aa-46346691c4c1-webhook-cert\") pod \"packageserver-d55dfcdfc-69xn8\" (UID: \"2dfb8d2a-73aa-4723-b1aa-46346691c4c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.274042 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a96b097b-e9f2-4e75-a458-332b3000cae6-certs\") pod \"machine-config-server-8ft6n\" (UID: \"a96b097b-e9f2-4e75-a458-332b3000cae6\") " pod="openshift-machine-config-operator/machine-config-server-8ft6n"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.274329 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d15a27f-97a8-4c8e-8450-5266afa2d382-metrics-certs\") pod \"router-default-5444994796-5v2bh\" (UID: \"0d15a27f-97a8-4c8e-8450-5266afa2d382\") " pod="openshift-ingress/router-default-5444994796-5v2bh"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.274777 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee0a3d54-45e8-4e3b-9bed-bae82d409c21-srv-cert\") pod \"catalog-operator-68c6474976-9z79s\" (UID: \"ee0a3d54-45e8-4e3b-9bed-bae82d409c21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.274946 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-etcd-service-ca\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.275644 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a96b097b-e9f2-4e75-a458-332b3000cae6-node-bootstrap-token\") pod \"machine-config-server-8ft6n\" (UID: \"a96b097b-e9f2-4e75-a458-332b3000cae6\") " pod="openshift-machine-config-operator/machine-config-server-8ft6n"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.276467 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/35f19686-9d5d-470f-8431-24ba28e8237e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7d4p5\" (UID: \"35f19686-9d5d-470f-8431-24ba28e8237e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.277466 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45958d91-5d71-4ecc-9174-75d0d4e22f5d-config\") pod \"service-ca-operator-777779d784-r4zq5\" (UID: \"45958d91-5d71-4ecc-9174-75d0d4e22f5d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4zq5"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.278301 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b7997c32-6e00-4402-acfb-d3bf63227f0b-proxy-tls\") pod \"machine-config-controller-84d6567774-dsrht\" (UID: \"b7997c32-6e00-4402-acfb-d3bf63227f0b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.278320 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d950d064-e8ae-47c8-adb8-cb60ba5bd5b9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nmcnv\" (UID: \"d950d064-e8ae-47c8-adb8-cb60ba5bd5b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmcnv"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.278923 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-plugins-dir\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.279028 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8257d7-3aa4-406a-9f47-bda46f688e32-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mjwmm\" (UID: \"be8257d7-3aa4-406a-9f47-bda46f688e32\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mjwmm"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.279318 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b4f3833-7619-485d-9cee-761a80d9f294-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-29blr\" (UID: \"1b4f3833-7619-485d-9cee-761a80d9f294\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.279404 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d950d064-e8ae-47c8-adb8-cb60ba5bd5b9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nmcnv\" (UID: \"d950d064-e8ae-47c8-adb8-cb60ba5bd5b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmcnv"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.279662 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-config\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.280022 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-socket-dir\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.280123 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad3fff93-6553-4492-8bf6-03118aa9f089-config-volume\") pod \"dns-default-mfrmm\" (UID: \"ad3fff93-6553-4492-8bf6-03118aa9f089\") " pod="openshift-dns/dns-default-mfrmm"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.280144 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-csi-data-dir\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.280563 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-etcd-ca\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp"
Jan 30 16:58:52 crc kubenswrapper[4875]: E0130 16:58:52.280781 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:52.780768839 +0000 UTC m=+143.328132222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.280776 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ef68fb4-d9c6-484f-a05e-a8e5d3460a28-config\") pod \"console-operator-58897d9998-stgmg\" (UID: \"8ef68fb4-d9c6-484f-a05e-a8e5d3460a28\") " pod="openshift-console-operator/console-operator-58897d9998-stgmg"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.280903 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94dc77e6-c491-4bda-a95f-6ab4892d06db-secret-volume\") pod \"collect-profiles-29496525-tcxvt\" (UID: \"94dc77e6-c491-4bda-a95f-6ab4892d06db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.281976 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2dfb8d2a-73aa-4723-b1aa-46346691c4c1-tmpfs\") pod \"packageserver-d55dfcdfc-69xn8\" (UID: \"2dfb8d2a-73aa-4723-b1aa-46346691c4c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.281996 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/16d079a0-8b15-4afe-b80b-29edde7f9251-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h4ql7\" (UID: \"16d079a0-8b15-4afe-b80b-29edde7f9251\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h4ql7"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.282306 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-registration-dir\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.282530 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0dce9182-7f6f-48d8-a9bf-096fd7ca43ac-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ztrcm\" (UID: \"0dce9182-7f6f-48d8-a9bf-096fd7ca43ac\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.283266 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d15a27f-97a8-4c8e-8450-5266afa2d382-service-ca-bundle\") pod \"router-default-5444994796-5v2bh\" (UID: \"0d15a27f-97a8-4c8e-8450-5266afa2d382\") " pod="openshift-ingress/router-default-5444994796-5v2bh"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.283351 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/69db4421-c7a4-42f0-9138-e132dda1bd51-signing-key\") pod \"service-ca-9c57cc56f-pgmbb\" (UID: \"69db4421-c7a4-42f0-9138-e132dda1bd51\") " pod="openshift-service-ca/service-ca-9c57cc56f-pgmbb"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.283406 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/38f0d965-f1ec-4d01-9155-d3740a9ce78f-encryption-config\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.283456 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-mountpoint-dir\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.283517 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/38f0d965-f1ec-4d01-9155-d3740a9ce78f-audit-dir\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.284237 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c315c604-594d-4069-823c-9859b87e22c7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrtj9\" (UID: \"c315c604-594d-4069-823c-9859b87e22c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrtj9"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.284530 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ce1959e-9d34-4221-8ede-5ec652b44b0d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2njb9\" (UID: \"0ce1959e-9d34-4221-8ede-5ec652b44b0d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2njb9"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.284749 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/beaaba45-df33-4540-ab78-79f1dc92f87b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6hpsd\" (UID: \"beaaba45-df33-4540-ab78-79f1dc92f87b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.284753 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2dfb8d2a-73aa-4723-b1aa-46346691c4c1-apiservice-cert\") pod \"packageserver-d55dfcdfc-69xn8\" (UID: \"2dfb8d2a-73aa-4723-b1aa-46346691c4c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.285281 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-etcd-client\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.285468 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/beaaba45-df33-4540-ab78-79f1dc92f87b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6hpsd\" (UID: \"beaaba45-df33-4540-ab78-79f1dc92f87b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.285546 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0d15a27f-97a8-4c8e-8450-5266afa2d382-default-certificate\") pod \"router-default-5444994796-5v2bh\" (UID: \"0d15a27f-97a8-4c8e-8450-5266afa2d382\") " pod="openshift-ingress/router-default-5444994796-5v2bh"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.285733 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ef68fb4-d9c6-484f-a05e-a8e5d3460a28-serving-cert\") pod \"console-operator-58897d9998-stgmg\" (UID: \"8ef68fb4-d9c6-484f-a05e-a8e5d3460a28\") " pod="openshift-console-operator/console-operator-58897d9998-stgmg"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.285912 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/80518ae7-5ae1-40f4-8551-c97d8dfe4433-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s24dp\" (UID: \"80518ae7-5ae1-40f4-8551-c97d8dfe4433\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s24dp"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.286262 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0d15a27f-97a8-4c8e-8450-5266afa2d382-stats-auth\") pod \"router-default-5444994796-5v2bh\" (UID: \"0d15a27f-97a8-4c8e-8450-5266afa2d382\") " pod="openshift-ingress/router-default-5444994796-5v2bh"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.286732 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be8257d7-3aa4-406a-9f47-bda46f688e32-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mjwmm\" (UID: \"be8257d7-3aa4-406a-9f47-bda46f688e32\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mjwmm"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.286810 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/35f19686-9d5d-470f-8431-24ba28e8237e-srv-cert\") pod \"olm-operator-6b444d44fb-7d4p5\" (UID: \"35f19686-9d5d-470f-8431-24ba28e8237e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.287537 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-serving-cert\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.287920 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ad3fff93-6553-4492-8bf6-03118aa9f089-metrics-tls\") pod \"dns-default-mfrmm\" (UID: \"ad3fff93-6553-4492-8bf6-03118aa9f089\") " pod="openshift-dns/dns-default-mfrmm"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.295538 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/edaae5aa-0654-4349-9473-907e90886e59-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4gqn8\" (UID: \"edaae5aa-0654-4349-9473-907e90886e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4gqn8"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.301631 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ef68fb4-d9c6-484f-a05e-a8e5d3460a28-trusted-ca\") pod \"console-operator-58897d9998-stgmg\" (UID: \"8ef68fb4-d9c6-484f-a05e-a8e5d3460a28\") " pod="openshift-console-operator/console-operator-58897d9998-stgmg"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.306100 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45958d91-5d71-4ecc-9174-75d0d4e22f5d-serving-cert\") pod \"service-ca-operator-777779d784-r4zq5\" (UID: \"45958d91-5d71-4ecc-9174-75d0d4e22f5d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4zq5"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.306889 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd902c0b-6664-425d-ad65-dd2069a17fae-cert\") pod \"ingress-canary-nsbwm\" (UID: \"fd902c0b-6664-425d-ad65-dd2069a17fae\") " pod="openshift-ingress-canary/ingress-canary-nsbwm"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.310406 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee0a3d54-45e8-4e3b-9bed-bae82d409c21-profile-collector-cert\") pod \"catalog-operator-68c6474976-9z79s\" (UID: \"ee0a3d54-45e8-4e3b-9bed-bae82d409c21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.311529 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsdjs\" (UniqueName: \"kubernetes.io/projected/37fa5454-ad47-4960-be87-5d9d4e4eab0f-kube-api-access-hsdjs\") pod \"console-f9d7485db-7s4zv\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " pod="openshift-console/console-f9d7485db-7s4zv"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.318054 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-bound-sa-token\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.336179 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8555787c-19c9-49c3-8b1a-7261cb693b97-bound-sa-token\") pod \"ingress-operator-5b745b69d9-scxjx\" (UID: \"8555787c-19c9-49c3-8b1a-7261cb693b97\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.357540 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndkxr\" (UniqueName: \"kubernetes.io/projected/a764d0e3-2762-4d13-b92e-30e68c104bf6-kube-api-access-ndkxr\") pod \"oauth-openshift-558db77b4-gv6jw\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.369188 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:58:52 crc kubenswrapper[4875]: E0130 16:58:52.369330 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:52.869308821 +0000 UTC m=+143.416672204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.369505 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:52 crc kubenswrapper[4875]: E0130 16:58:52.369859 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:52.869850708 +0000 UTC m=+143.417214091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.379141 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.404912 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crkrg\" (UniqueName: \"kubernetes.io/projected/c283ead9-a8b9-43ff-8188-5c583e3863f4-kube-api-access-crkrg\") pod \"machine-approver-56656f9798-v7xv7\" (UID: \"c283ead9-a8b9-43ff-8188-5c583e3863f4\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.415958 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5twl6\" (UniqueName: \"kubernetes.io/projected/0ce1959e-9d34-4221-8ede-5ec652b44b0d-kube-api-access-5twl6\") pod \"control-plane-machine-set-operator-78cbb6b69f-2njb9\" (UID: \"0ce1959e-9d34-4221-8ede-5ec652b44b0d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2njb9"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.434438 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f5xk\" (UniqueName: \"kubernetes.io/projected/1d8dcd63-7b87-47d3-84b8-3986857a6bc8-kube-api-access-6f5xk\") pod \"etcd-operator-b45778765-8sjsp\" (UID: \"1d8dcd63-7b87-47d3-84b8-3986857a6bc8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.437612 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.457432 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7s4zv"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.462667 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdjjd\" (UniqueName: \"kubernetes.io/projected/ee0a3d54-45e8-4e3b-9bed-bae82d409c21-kube-api-access-vdjjd\") pod \"catalog-operator-68c6474976-9z79s\" (UID: \"ee0a3d54-45e8-4e3b-9bed-bae82d409c21\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.469323 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ht6ll"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.470319 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:58:52 crc kubenswrapper[4875]: E0130 16:58:52.470449 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:52.970418164 +0000 UTC m=+143.517781547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.476010 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:52 crc kubenswrapper[4875]: E0130 16:58:52.476690 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:52.976673108 +0000 UTC m=+143.524036491 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.482260 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4gqn8"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.483650 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d296\" (UniqueName: \"kubernetes.io/projected/ad3fff93-6553-4492-8bf6-03118aa9f089-kube-api-access-2d296\") pod \"dns-default-mfrmm\" (UID: \"ad3fff93-6553-4492-8bf6-03118aa9f089\") " pod="openshift-dns/dns-default-mfrmm"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.494514 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.506246 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvk7g\" (UniqueName: \"kubernetes.io/projected/fd902c0b-6664-425d-ad65-dd2069a17fae-kube-api-access-rvk7g\") pod \"ingress-canary-nsbwm\" (UID: \"fd902c0b-6664-425d-ad65-dd2069a17fae\") " pod="openshift-ingress-canary/ingress-canary-nsbwm"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.532715 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td2kp\" (UniqueName: \"kubernetes.io/projected/2dfb8d2a-73aa-4723-b1aa-46346691c4c1-kube-api-access-td2kp\") pod \"packageserver-d55dfcdfc-69xn8\" (UID: \"2dfb8d2a-73aa-4723-b1aa-46346691c4c1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.538137 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.550519 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt6hd\" (UniqueName: \"kubernetes.io/projected/69e24be4-7935-43ce-9815-ed1fa40e9933-kube-api-access-wt6hd\") pod \"downloads-7954f5f757-qc97s\" (UID: \"69e24be4-7935-43ce-9815-ed1fa40e9933\") " pod="openshift-console/downloads-7954f5f757-qc97s"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.571894 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2njb9"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.579135 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:58:52 crc kubenswrapper[4875]: E0130 16:58:52.579732 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:53.07971204 +0000 UTC m=+143.627075423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.579857 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.580567 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zrzj\" (UniqueName: \"kubernetes.io/projected/b7997c32-6e00-4402-acfb-d3bf63227f0b-kube-api-access-7zrzj\") pod \"machine-config-controller-84d6567774-dsrht\" (UID: \"b7997c32-6e00-4402-acfb-d3bf63227f0b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.588190 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.591414 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhxj6\" (UniqueName: \"kubernetes.io/projected/beaaba45-df33-4540-ab78-79f1dc92f87b-kube-api-access-lhxj6\") pod \"marketplace-operator-79b997595-6hpsd\" (UID: \"beaaba45-df33-4540-ab78-79f1dc92f87b\") " pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.597250 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pps4g\" (UniqueName: \"kubernetes.io/projected/d950d064-e8ae-47c8-adb8-cb60ba5bd5b9-kube-api-access-pps4g\") pod \"kube-storage-version-migrator-operator-b67b599dd-nmcnv\" (UID: \"d950d064-e8ae-47c8-adb8-cb60ba5bd5b9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmcnv"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.606480 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.613933 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ks6j\" (UniqueName: \"kubernetes.io/projected/0d15a27f-97a8-4c8e-8450-5266afa2d382-kube-api-access-2ks6j\") pod \"router-default-5444994796-5v2bh\" (UID: \"0d15a27f-97a8-4c8e-8450-5266afa2d382\") " pod="openshift-ingress/router-default-5444994796-5v2bh"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.641831 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n4qh\" (UniqueName: \"kubernetes.io/projected/a96b097b-e9f2-4e75-a458-332b3000cae6-kube-api-access-5n4qh\") pod \"machine-config-server-8ft6n\" (UID: \"a96b097b-e9f2-4e75-a458-332b3000cae6\") " pod="openshift-machine-config-operator/machine-config-server-8ft6n"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.678978 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.679541 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8ft6n"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.683999 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:52 crc kubenswrapper[4875]: E0130 16:58:52.684345 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:53.184332521 +0000 UTC m=+143.731695904 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.686458 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nsbwm"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.688395 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl87g\" (UniqueName: \"kubernetes.io/projected/38f0d965-f1ec-4d01-9155-d3740a9ce78f-kube-api-access-zl87g\") pod \"apiserver-76f77b778f-2d4sj\" (UID: \"38f0d965-f1ec-4d01-9155-d3740a9ce78f\") " pod="openshift-apiserver/apiserver-76f77b778f-2d4sj"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.691196 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be8257d7-3aa4-406a-9f47-bda46f688e32-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mjwmm\" (UID: \"be8257d7-3aa4-406a-9f47-bda46f688e32\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mjwmm"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.717364 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kh6t\" (UniqueName: \"kubernetes.io/projected/8ef68fb4-d9c6-484f-a05e-a8e5d3460a28-kube-api-access-9kh6t\") pod \"console-operator-58897d9998-stgmg\" (UID: \"8ef68fb4-d9c6-484f-a05e-a8e5d3460a28\") " pod="openshift-console-operator/console-operator-58897d9998-stgmg"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.717400 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mfrmm"
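Each "failed. No retries permitted until ..." entry above comes from kubelet's nested pending operations queue, which serializes work per volume and backs retries off exponentially; in this window every attempt still shows the initial step ("durationBeforeRetry 500ms"). A sketch of that delay schedule under assumed constants (only the 500ms initial value is visible in this log; the doubling factor and the cap are the usual kubelet defaults, taken on trust here, not read from the log):

```go
// Model of the per-operation retry delay: start at 500ms, double on each
// consecutive failure, clamp at an assumed maximum. Purely illustrative.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDelay = 500 * time.Millisecond        // matches "durationBeforeRetry 500ms"
		maxDelay     = 2*time.Minute + 2*time.Second // assumed cap
	)
	delay := initialDelay
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("failure %2d -> retry after %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

The "No retries permitted until" timestamps above sit roughly 500ms after each failure, consistent with the schedule still being on its first step each time the operation is re-created.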
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.742012 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-2d4sj"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.751652 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfbnj\" (UniqueName: \"kubernetes.io/projected/45958d91-5d71-4ecc-9174-75d0d4e22f5d-kube-api-access-qfbnj\") pod \"service-ca-operator-777779d784-r4zq5\" (UID: \"45958d91-5d71-4ecc-9174-75d0d4e22f5d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4zq5"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.755434 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntskk\" (UniqueName: \"kubernetes.io/projected/35f19686-9d5d-470f-8431-24ba28e8237e-kube-api-access-ntskk\") pod \"olm-operator-6b444d44fb-7d4p5\" (UID: \"35f19686-9d5d-470f-8431-24ba28e8237e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.782170 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffpcz\" (UniqueName: \"kubernetes.io/projected/c315c604-594d-4069-823c-9859b87e22c7-kube-api-access-ffpcz\") pod \"openshift-controller-manager-operator-756b6f6bc6-jrtj9\" (UID: \"c315c604-594d-4069-823c-9859b87e22c7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrtj9"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.782485 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpszd\" (UniqueName: \"kubernetes.io/projected/0dce9182-7f6f-48d8-a9bf-096fd7ca43ac-kube-api-access-dpszd\") pod \"machine-config-operator-74547568cd-ztrcm\" (UID: \"0dce9182-7f6f-48d8-a9bf-096fd7ca43ac\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.792935 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:58:52 crc kubenswrapper[4875]: E0130 16:58:52.793303 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:53.293285528 +0000 UTC m=+143.840648911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.809313 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5v2bh"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.809772 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-stgmg"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.812530 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjlp7\" (UniqueName: \"kubernetes.io/projected/69db4421-c7a4-42f0-9138-e132dda1bd51-kube-api-access-hjlp7\") pod \"service-ca-9c57cc56f-pgmbb\" (UID: \"69db4421-c7a4-42f0-9138-e132dda1bd51\") " pod="openshift-service-ca/service-ca-9c57cc56f-pgmbb"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.817683 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjkqc\" (UniqueName: \"kubernetes.io/projected/80518ae7-5ae1-40f4-8551-c97d8dfe4433-kube-api-access-fjkqc\") pod \"package-server-manager-789f6589d5-s24dp\" (UID: \"80518ae7-5ae1-40f4-8551-c97d8dfe4433\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s24dp"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.817941 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrtj9"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.824160 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-qc97s"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.831100 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.853509 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9gxs\" (UniqueName: \"kubernetes.io/projected/9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83-kube-api-access-q9gxs\") pod \"csi-hostpathplugin-5v28g\" (UID: \"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83\") " pod="hostpath-provisioner/csi-hostpathplugin-5v28g"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.854799 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rclr\" (UniqueName: \"kubernetes.io/projected/16d079a0-8b15-4afe-b80b-29edde7f9251-kube-api-access-5rclr\") pod \"multus-admission-controller-857f4d67dd-h4ql7\" (UID: \"16d079a0-8b15-4afe-b80b-29edde7f9251\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h4ql7"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.856344 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.865412 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmcnv"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.869178 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gv6jw"]
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.894125 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:52 crc kubenswrapper[4875]: E0130 16:58:52.894406 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:53.394395069 +0000 UTC m=+143.941758452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.898725 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-h4ql7"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.903132 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cpzq\" (UniqueName: \"kubernetes.io/projected/53ad913d-a076-4972-93ae-1271d4c2ab76-kube-api-access-8cpzq\") pod \"migrator-59844c95c7-pwgd8\" (UID: \"53ad913d-a076-4972-93ae-1271d4c2ab76\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pwgd8"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.915324 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt2w5\" (UniqueName: \"kubernetes.io/projected/94dc77e6-c491-4bda-a95f-6ab4892d06db-kube-api-access-zt2w5\") pod \"collect-profiles-29496525-tcxvt\" (UID: \"94dc77e6-c491-4bda-a95f-6ab4892d06db\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.915961 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.928849 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s24dp"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.940777 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt"
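By this point most pods on the node have logged "No sandbox for pod can be found. Need to start a new one": kubelet found no live pause sandbox for the pod and will ask the CRI runtime to create one before any containers can start, which is why the PLEG ContainerStarted events begin appearing just below. A throwaway triage sketch for tallying those pods from a captured journal, reading the log text from stdin (the file name and invocation, e.g. journalctl -u kubelet | go run tally.go, are hypothetical):

```go
// Count "No sandbox for pod can be found" occurrences per pod in a kubelet
// journal stream on stdin and print a sorted tally.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

func main() {
	re := regexp.MustCompile(`No sandbox for pod can be found.*?pod="([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	pods := make([]string, 0, len(counts))
	for p := range counts {
		pods = append(pods, p)
	}
	sort.Strings(pods)
	for _, p := range pods {
		fmt.Printf("%4d %s\n", counts[p], p)
	}
}
```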
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.960497 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4zq5"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.962618 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mjwmm"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.966863 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pgmbb"
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.976272 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7" event={"ID":"c283ead9-a8b9-43ff-8188-5c583e3863f4","Type":"ContainerStarted","Data":"c3a4f33688b759225b9d5e4d83c15fbcbcff5d7b6eb9240c1c174c340dfe3f39"}
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.978501 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8ft6n" event={"ID":"a96b097b-e9f2-4e75-a458-332b3000cae6","Type":"ContainerStarted","Data":"0699b6ccb95417f357503f280daf843657a6ca976f1cd2fe3b04cc11fbba0ed3"}
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.980395 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8sjn" event={"ID":"d01c20ec-32e4-4ffe-af84-a7e75df66733","Type":"ContainerStarted","Data":"58d1a69418880edd9c0d24085e4a87319747061c4ece9993d49f34556da87faa"}
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.980423 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8sjn" event={"ID":"d01c20ec-32e4-4ffe-af84-a7e75df66733","Type":"ContainerStarted","Data":"9fa2a46c2d8edc9116ab312ad2473bd45f1fb19e89a4bcf64c1efc2488efb6e3"}
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.986975 4875 generic.go:334] "Generic (PLEG): container finished" podID="fa7f2369-f741-4a6e-af2c-4ead754f7ea4" containerID="7cf2059c165fc92b477509b21fc14f1b183bd62439a68c8a6357795b57d33d49" exitCode=0
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.987335 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" event={"ID":"fa7f2369-f741-4a6e-af2c-4ead754f7ea4","Type":"ContainerDied","Data":"7cf2059c165fc92b477509b21fc14f1b183bd62439a68c8a6357795b57d33d49"}
Jan 30 16:58:52 crc kubenswrapper[4875]: I0130 16:58:52.987402 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" event={"ID":"fa7f2369-f741-4a6e-af2c-4ead754f7ea4","Type":"ContainerStarted","Data":"b1c2a48e37d2a05078dd1965d8fabde70ea13f3d29ba91d97abf82b8823b4ecf"}
Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:52.993813 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng" event={"ID":"f4d2781f-afa7-44e3-967b-08aaea623583","Type":"ContainerStarted","Data":"ea5bb820dff5fe2934eead876c70b2bd3661956b61e6f8970e51676b2b3feafd"}
Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:52.994382 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng"
Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:52.994910 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:58:53 crc kubenswrapper[4875]: E0130 16:58:52.995514 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:53.495496312 +0000 UTC m=+144.042859695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.026399 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mhw2" event={"ID":"bfaa9666-5e7d-4a64-8bc5-1936748f9375","Type":"ContainerStarted","Data":"84906ddac1107c3984e9e7c7e457774547038058b6dbcb39e003e1b006360683"}
Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.026448 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mhw2" event={"ID":"bfaa9666-5e7d-4a64-8bc5-1936748f9375","Type":"ContainerStarted","Data":"8efc12633f092352b28c90db31ebc5f55d7d19055f3086c01a0c9f281a53fd18"}
Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.026457 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mhw2" event={"ID":"bfaa9666-5e7d-4a64-8bc5-1936748f9375","Type":"ContainerStarted","Data":"53cbbff5356410d389d1b036779ab6d7ab167984a2ab2998200108d963c4d6d2"}
Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.027071 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5v28g"
Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.036323 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2" event={"ID":"50a47f63-146d-4621-8bd2-fdb469f0fc8a","Type":"ContainerStarted","Data":"35c70b5a7255cf05c53526f129c29890ad706ffbb8e0df433f0042cb640a5804"}
Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.036363 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2" event={"ID":"50a47f63-146d-4621-8bd2-fdb469f0fc8a","Type":"ContainerStarted","Data":"0fd087fed94913e9d68d16e078282a97ab9abdddd5cc7d0a3485ff6d49104807"}
Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.044246 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5v2bh" event={"ID":"0d15a27f-97a8-4c8e-8450-5266afa2d382","Type":"ContainerStarted","Data":"30866e3e700b324c5ea2c81e159106b382fa0a65234c760e1ef7d25fb3cc8395"}
Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.082520 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7s4zv"]
Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.100785 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:53 crc kubenswrapper[4875]: E0130 16:58:53.108956 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:53.608930937 +0000 UTC m=+144.156294320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.147787 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pwgd8"
Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.206538 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:58:53 crc kubenswrapper[4875]: E0130 16:58:53.206927 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:53.706912353 +0000 UTC m=+144.254275736 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.308086 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:53 crc kubenswrapper[4875]: E0130 16:58:53.309215 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:53.809202561 +0000 UTC m=+144.356565934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.409741 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:53 crc kubenswrapper[4875]: E0130 16:58:53.410125 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:53.910109058 +0000 UTC m=+144.457472441 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.517468 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:53 crc kubenswrapper[4875]: E0130 16:58:53.518949 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:54.018933639 +0000 UTC m=+144.566297032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.623929 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:53 crc kubenswrapper[4875]: E0130 16:58:53.624324 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:54.124297094 +0000 UTC m=+144.671660477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.698090 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" podStartSLOduration=123.698065529 podStartE2EDuration="2m3.698065529s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:53.697439481 +0000 UTC m=+144.244802854" watchObservedRunningTime="2026-01-30 16:58:53.698065529 +0000 UTC m=+144.245428912" Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.725951 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:53 crc kubenswrapper[4875]: E0130 16:58:53.726353 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:54.226341225 +0000 UTC m=+144.773704608 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.827218 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:53 crc kubenswrapper[4875]: E0130 16:58:53.827711 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:54.327690126 +0000 UTC m=+144.875053519 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.906847 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng" podStartSLOduration=123.906832828 podStartE2EDuration="2m3.906832828s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:53.90594671 +0000 UTC m=+144.453310093" watchObservedRunningTime="2026-01-30 16:58:53.906832828 +0000 UTC m=+144.454196211" Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.929246 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:53 crc kubenswrapper[4875]: E0130 16:58:53.929764 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:54.429745858 +0000 UTC m=+144.977109241 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.985501 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-gtwl2" podStartSLOduration=123.985481545 podStartE2EDuration="2m3.985481545s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:53.984396201 +0000 UTC m=+144.531759584" watchObservedRunningTime="2026-01-30 16:58:53.985481545 +0000 UTC m=+144.532844928" Jan 30 16:58:53 crc kubenswrapper[4875]: I0130 16:58:53.986456 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" podStartSLOduration=123.986451414 podStartE2EDuration="2m3.986451414s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:53.936059613 +0000 UTC m=+144.483423006" watchObservedRunningTime="2026-01-30 16:58:53.986451414 +0000 UTC m=+144.533814797" Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.031128 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:54 crc kubenswrapper[4875]: E0130 16:58:54.031471 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:54.531456139 +0000 UTC m=+145.078819522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.077786 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5v2bh" event={"ID":"0d15a27f-97a8-4c8e-8450-5266afa2d382","Type":"ContainerStarted","Data":"d99b9a577a18955df6c8de4df95a5796b3e4870ab578f49e511d637c5c0308ed"} Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.098375 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7s4zv" event={"ID":"37fa5454-ad47-4960-be87-5d9d4e4eab0f","Type":"ContainerStarted","Data":"7b2bdbbeadc8800eb70ba36d1807dcfb88b324469fac8765274d0e7bea5a7d46"} Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.098438 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7s4zv" event={"ID":"37fa5454-ad47-4960-be87-5d9d4e4eab0f","Type":"ContainerStarted","Data":"7cecdaebedeb9d659dc44a872680c8161e985be854bf31e31b9c7da69133a52f"} Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.101527 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7" event={"ID":"c283ead9-a8b9-43ff-8188-5c583e3863f4","Type":"ContainerStarted","Data":"f7d6900d2410472df17fc66eaaf4af00974688af5a1ebba7e386b89db3134069"} Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.129940 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8ft6n" event={"ID":"a96b097b-e9f2-4e75-a458-332b3000cae6","Type":"ContainerStarted","Data":"14028cc060e43fcac54972c0d16269654a9c800e71f6a5c56d251dce9656107a"} Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.143401 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:54 crc kubenswrapper[4875]: E0130 16:58:54.145710 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:54.645686558 +0000 UTC m=+145.193049951 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.152375 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-j2q7s" podStartSLOduration=124.152355675 podStartE2EDuration="2m4.152355675s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:54.128944249 +0000 UTC m=+144.676307632" watchObservedRunningTime="2026-01-30 16:58:54.152355675 +0000 UTC m=+144.699719058" Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.245133 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:54 crc kubenswrapper[4875]: E0130 16:58:54.247052 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:54.747029008 +0000 UTC m=+145.294392391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.247564 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" event={"ID":"a764d0e3-2762-4d13-b92e-30e68c104bf6","Type":"ContainerStarted","Data":"2fef2e01831b17a5d310ab2236793ede88081f6378f05d6f9be272312407298f"} Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.267245 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mhw2" podStartSLOduration=124.267226814 podStartE2EDuration="2m4.267226814s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:54.24678231 +0000 UTC m=+144.794145713" watchObservedRunningTime="2026-01-30 16:58:54.267226814 +0000 UTC m=+144.814590197" Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.356726 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:54 crc kubenswrapper[4875]: E0130 16:58:54.357162 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:54.85714727 +0000 UTC m=+145.404510653 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.428488 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr"] Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.451845 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4gqn8"] Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.463577 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:54 crc kubenswrapper[4875]: E0130 16:58:54.463885 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:54.963867826 +0000 UTC m=+145.511231209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.464064 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:54 crc kubenswrapper[4875]: E0130 16:58:54.464485 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:54.964474515 +0000 UTC m=+145.511837898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.475260 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx"] Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.565327 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:54 crc kubenswrapper[4875]: E0130 16:58:54.566103 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:55.066083953 +0000 UTC m=+145.613447336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.571751 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8"] Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.576381 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8sjsp"] Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.581726 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6hpsd"] Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.634620 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s"] Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.673988 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:54 crc kubenswrapper[4875]: E0130 16:58:54.674274 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:55.174262215 +0000 UTC m=+145.721625598 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.675133 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ht6ll"] Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.695961 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8sjn" podStartSLOduration=124.695940576 podStartE2EDuration="2m4.695940576s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:54.689802176 +0000 UTC m=+145.237165559" watchObservedRunningTime="2026-01-30 16:58:54.695940576 +0000 UTC m=+145.243303969" Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.724771 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wxc56" podStartSLOduration=124.724751499 podStartE2EDuration="2m4.724751499s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:54.723532601 +0000 UTC m=+145.270895984" watchObservedRunningTime="2026-01-30 16:58:54.724751499 +0000 UTC m=+145.272114882" Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.788300 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:54 crc kubenswrapper[4875]: E0130 16:58:54.789919 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:55.289904557 +0000 UTC m=+145.837267940 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.796489 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2njb9"] Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.796721 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:54 crc kubenswrapper[4875]: E0130 16:58:54.797144 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:55.297133622 +0000 UTC m=+145.844497005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.810997 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5v2bh" Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.816442 4875 patch_prober.go:28] interesting pod/router-default-5444994796-5v2bh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:58:54 crc kubenswrapper[4875]: [-]has-synced failed: reason withheld Jan 30 16:58:54 crc kubenswrapper[4875]: [+]process-running ok Jan 30 16:58:54 crc kubenswrapper[4875]: healthz check failed Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.816483 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5v2bh" podUID="0d15a27f-97a8-4c8e-8450-5266afa2d382" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.832113 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-7s4zv" podStartSLOduration=124.832094154 podStartE2EDuration="2m4.832094154s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:54.822500278 +0000 UTC m=+145.369863661" watchObservedRunningTime="2026-01-30 16:58:54.832094154 +0000 UTC m=+145.379457537" Jan 30 16:58:54 crc 
kubenswrapper[4875]: I0130 16:58:54.833663 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2d4sj"] Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.856184 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nsbwm"] Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.874709 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5v2bh" podStartSLOduration=124.874622712 podStartE2EDuration="2m4.874622712s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:54.862671842 +0000 UTC m=+145.410035225" watchObservedRunningTime="2026-01-30 16:58:54.874622712 +0000 UTC m=+145.421986095" Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.902357 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:54 crc kubenswrapper[4875]: E0130 16:58:54.903920 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:55.40390629 +0000 UTC m=+145.951269673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.904079 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:54 crc kubenswrapper[4875]: E0130 16:58:54.904352 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:55.404344763 +0000 UTC m=+145.951708146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.917105 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8ft6n" podStartSLOduration=5.917081898 podStartE2EDuration="5.917081898s" podCreationTimestamp="2026-01-30 16:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:54.901855136 +0000 UTC m=+145.449218529" watchObservedRunningTime="2026-01-30 16:58:54.917081898 +0000 UTC m=+145.464445281" Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.951741 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" podStartSLOduration=124.951722062 podStartE2EDuration="2m4.951722062s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:54.950435261 +0000 UTC m=+145.497798644" watchObservedRunningTime="2026-01-30 16:58:54.951722062 +0000 UTC m=+145.499085445" Jan 30 16:58:54 crc kubenswrapper[4875]: I0130 16:58:54.981040 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mfrmm"] Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.012243 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:55 crc kubenswrapper[4875]: E0130 16:58:55.012346 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:55.512328849 +0000 UTC m=+146.059692232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.012645 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:55 crc kubenswrapper[4875]: E0130 16:58:55.012927 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:55.512920598 +0000 UTC m=+146.060283981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:55 crc kubenswrapper[4875]: W0130 16:58:55.044237 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad3fff93_6553_4492_8bf6_03118aa9f089.slice/crio-88452e116401dcd50daaf6117141cfbba5247bc201a8079e7951a2e9a7c02411 WatchSource:0}: Error finding container 88452e116401dcd50daaf6117141cfbba5247bc201a8079e7951a2e9a7c02411: Status 404 returned error can't find the container with id 88452e116401dcd50daaf6117141cfbba5247bc201a8079e7951a2e9a7c02411 Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.113572 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:55 crc kubenswrapper[4875]: E0130 16:58:55.113957 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:55.613939728 +0000 UTC m=+146.161303111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.203426 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmcnv"] Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.218812 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:55 crc kubenswrapper[4875]: E0130 16:58:55.219280 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:55.719264061 +0000 UTC m=+146.266627444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.219745 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ht6ll" event={"ID":"ac862908-f2bf-42a2-b453-12f722f2cae3","Type":"ContainerStarted","Data":"1cd709b52d9d68563d506b714a7c79b68e1dd48ae498cb31f271b145ec0c7e6f"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.227650 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pgmbb"] Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.241167 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5"] Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.280716 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" event={"ID":"fa7f2369-f741-4a6e-af2c-4ead754f7ea4","Type":"ContainerStarted","Data":"ba0024f24e82b6fbb5537f8365049f9f192e8e58bff0075d3f159ad1c70e2f2e"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.280801 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mjwmm"] Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.298792 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt"] Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.307445 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7" event={"ID":"c283ead9-a8b9-43ff-8188-5c583e3863f4","Type":"ContainerStarted","Data":"a0bdc3d114e8e5260789bd15bb943d333bac43584916897263651511d3522524"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.320535 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:55 crc kubenswrapper[4875]: E0130 16:58:55.321005 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:55.820986472 +0000 UTC m=+146.368349845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.321260 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:55 crc kubenswrapper[4875]: E0130 16:58:55.321528 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:55.821520748 +0000 UTC m=+146.368884131 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.324238 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8" event={"ID":"2dfb8d2a-73aa-4723-b1aa-46346691c4c1","Type":"ContainerStarted","Data":"978c0b543acc9399d8e4ef6eebb6c32998866761a68212e07bfec24d71ede6b4"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.324294 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8" event={"ID":"2dfb8d2a-73aa-4723-b1aa-46346691c4c1","Type":"ContainerStarted","Data":"458bd0def66c74557c545176971729fea76422535a4b743ca327af2bf93c925e"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.327782 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.328850 4875 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-69xn8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.328892 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8" podUID="2dfb8d2a-73aa-4723-b1aa-46346691c4c1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.351523 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5v28g"] Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.355808 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mfrmm" event={"ID":"ad3fff93-6553-4492-8bf6-03118aa9f089","Type":"ContainerStarted","Data":"88452e116401dcd50daaf6117141cfbba5247bc201a8079e7951a2e9a7c02411"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.359566 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pwgd8"] Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.359624 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v7xv7" podStartSLOduration=125.359604369 podStartE2EDuration="2m5.359604369s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:55.345788271 +0000 UTC m=+145.893151644" watchObservedRunningTime="2026-01-30 16:58:55.359604369 +0000 UTC m=+145.906967752" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.376510 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht"] Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.388962 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2njb9" event={"ID":"0ce1959e-9d34-4221-8ede-5ec652b44b0d","Type":"ContainerStarted","Data":"d879bcb9603a1effa84b6ba106a91e934ded24c9604f69c3f2a2fd2f56f1b6c6"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.403232 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" event={"ID":"beaaba45-df33-4540-ab78-79f1dc92f87b","Type":"ContainerStarted","Data":"dc68a31a351ff1c2c90c9e1fa1861fbf7af9afeefb86abfb77c1fd1d96523cc0"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.403290 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" event={"ID":"beaaba45-df33-4540-ab78-79f1dc92f87b","Type":"ContainerStarted","Data":"3ced7b6e312dd7257e22f25de89481dd89ab4d8533d563d4b3471998e45c09e8"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.404065 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.404930 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-69xn8" podStartSLOduration=125.404910803 podStartE2EDuration="2m5.404910803s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:55.370086853 +0000 UTC m=+145.917450226" watchObservedRunningTime="2026-01-30 16:58:55.404910803 +0000 UTC m=+145.952274186" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.409669 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr" event={"ID":"1b4f3833-7619-485d-9cee-761a80d9f294","Type":"ContainerStarted","Data":"34ea2ae5300600bbec7b226be5391f0b81df945bafbb5b2dfbd726f999e8b5c2"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.409709 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr" event={"ID":"1b4f3833-7619-485d-9cee-761a80d9f294","Type":"ContainerStarted","Data":"6d410164dc20cd2892ed0f85c058dd8abddd480c94a6601edb461cc89c43c205"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.450187 4875 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6hpsd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.450736 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" podUID="beaaba45-df33-4540-ab78-79f1dc92f87b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.451818 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-stgmg"] Jan 30 16:58:55 
crc kubenswrapper[4875]: I0130 16:58:55.453005 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:55 crc kubenswrapper[4875]: E0130 16:58:55.471008 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:55.969231535 +0000 UTC m=+146.516594928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.474653 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s24dp"] Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.485897 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.490658 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" event={"ID":"38f0d965-f1ec-4d01-9155-d3740a9ce78f","Type":"ContainerStarted","Data":"f26732c83ca215ce6b506a391e110cfe6a8aee160552a878d1ffad4cfc5f3913"} Jan 30 16:58:55 crc kubenswrapper[4875]: E0130 16:58:55.493121 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:55.993098434 +0000 UTC m=+146.540461817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.493745 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrtj9"] Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.506164 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm"] Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.508783 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2njb9" podStartSLOduration=125.5087637 podStartE2EDuration="2m5.5087637s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:55.409276057 +0000 UTC m=+145.956639450" watchObservedRunningTime="2026-01-30 16:58:55.5087637 +0000 UTC m=+146.056127083" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.528163 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h4ql7"] Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.535359 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-r4zq5"] Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.537086 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" podStartSLOduration=125.537067346 podStartE2EDuration="2m5.537067346s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:55.491191536 +0000 UTC m=+146.038554919" watchObservedRunningTime="2026-01-30 16:58:55.537067346 +0000 UTC m=+146.084430729" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.539714 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qc97s"] Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.545740 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp" event={"ID":"1d8dcd63-7b87-47d3-84b8-3986857a6bc8","Type":"ContainerStarted","Data":"f69ce87c28bc21b8c943a2d0063172ba877fb37148a1ba21e086faa3700a844e"} Jan 30 16:58:55 crc kubenswrapper[4875]: W0130 16:58:55.552056 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16d079a0_8b15_4afe_b80b_29edde7f9251.slice/crio-49cd84f957716251b70ba165eff7426154f44f08714eba75f846a18e6b4377a3 WatchSource:0}: Error finding container 49cd84f957716251b70ba165eff7426154f44f08714eba75f846a18e6b4377a3: Status 404 returned error can't find the container with id 49cd84f957716251b70ba165eff7426154f44f08714eba75f846a18e6b4377a3 Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.559069 4875 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-29blr" podStartSLOduration=125.559049468 podStartE2EDuration="2m5.559049468s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:55.532541686 +0000 UTC m=+146.079905069" watchObservedRunningTime="2026-01-30 16:58:55.559049468 +0000 UTC m=+146.106412851" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.567397 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s" event={"ID":"ee0a3d54-45e8-4e3b-9bed-bae82d409c21","Type":"ContainerStarted","Data":"1452fe3d5a5d748343ff0dd9e7edd728ca6a2fa083d04e03688b15a8c7527d43"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.567525 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s" event={"ID":"ee0a3d54-45e8-4e3b-9bed-bae82d409c21","Type":"ContainerStarted","Data":"efbf4ad64482bbc410bcc4a3470d8994f7306ab0a072fb5998fee2d69b9c3d19"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.569533 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.575568 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp" podStartSLOduration=125.575552279 podStartE2EDuration="2m5.575552279s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:55.573393412 +0000 UTC m=+146.120756795" watchObservedRunningTime="2026-01-30 16:58:55.575552279 +0000 UTC m=+146.122915662" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.587717 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:55 crc kubenswrapper[4875]: E0130 16:58:55.588944 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:56.088923563 +0000 UTC m=+146.636286946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.589540 4875 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9z79s container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.589573 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s" podUID="ee0a3d54-45e8-4e3b-9bed-bae82d409c21" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.606535 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" event={"ID":"a764d0e3-2762-4d13-b92e-30e68c104bf6","Type":"ContainerStarted","Data":"4d3ea55d59a5904fcd2b94de812a53a149c4e4deb5cc2e371f131b8f105e1208"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.606916 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.607158 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s" podStartSLOduration=125.607144237 podStartE2EDuration="2m5.607144237s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:55.606335853 +0000 UTC m=+146.153699236" watchObservedRunningTime="2026-01-30 16:58:55.607144237 +0000 UTC m=+146.154507610" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.621859 4875 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-gv6jw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.621918 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" podUID="a764d0e3-2762-4d13-b92e-30e68c104bf6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.640463 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx" event={"ID":"8555787c-19c9-49c3-8b1a-7261cb693b97","Type":"ContainerStarted","Data":"8e2bd67bb88256bb306ea284dc866f4499913e842b62d327f2cabaa031cd43b6"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.640518 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx" event={"ID":"8555787c-19c9-49c3-8b1a-7261cb693b97","Type":"ContainerStarted","Data":"d52d17e9f684d1dacc3e7cdc8188e52574680768650f4ba61c55b0b05a92b55a"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.641250 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" podStartSLOduration=125.641234274 podStartE2EDuration="2m5.641234274s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:55.640252124 +0000 UTC m=+146.187615507" watchObservedRunningTime="2026-01-30 16:58:55.641234274 +0000 UTC m=+146.188597657" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.657189 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4gqn8" event={"ID":"edaae5aa-0654-4349-9473-907e90886e59","Type":"ContainerStarted","Data":"b6abf68bc862d06ffc9c0867b39faa345f7e8982a8821f274b2754d19f9533d0"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.666163 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx" podStartSLOduration=125.666149766 podStartE2EDuration="2m5.666149766s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:55.665903188 +0000 UTC m=+146.213266571" watchObservedRunningTime="2026-01-30 16:58:55.666149766 +0000 UTC m=+146.213513149" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.694607 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:55 crc kubenswrapper[4875]: E0130 16:58:55.696333 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:56.196313681 +0000 UTC m=+146.743677064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.726880 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nsbwm" event={"ID":"fd902c0b-6664-425d-ad65-dd2069a17fae","Type":"ContainerStarted","Data":"219ad3c5b1a2afb74b833a262f472524c49a0bb5111857e22fb67c88e0b68a1c"} Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.772073 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qrng" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.773515 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4gqn8" podStartSLOduration=125.773504742 podStartE2EDuration="2m5.773504742s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:55.72825653 +0000 UTC m=+146.275619913" watchObservedRunningTime="2026-01-30 16:58:55.773504742 +0000 UTC m=+146.320868125" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.775568 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-nsbwm" podStartSLOduration=6.775560835 podStartE2EDuration="6.775560835s" podCreationTimestamp="2026-01-30 16:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:55.772405438 +0000 UTC m=+146.319768821" watchObservedRunningTime="2026-01-30 16:58:55.775560835 +0000 UTC m=+146.322924218" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.808440 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:55 crc kubenswrapper[4875]: E0130 16:58:55.808780 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:56.308761304 +0000 UTC m=+146.856124687 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.842808 4875 patch_prober.go:28] interesting pod/router-default-5444994796-5v2bh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:58:55 crc kubenswrapper[4875]: [-]has-synced failed: reason withheld Jan 30 16:58:55 crc kubenswrapper[4875]: [+]process-running ok Jan 30 16:58:55 crc kubenswrapper[4875]: healthz check failed Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.842865 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5v2bh" podUID="0d15a27f-97a8-4c8e-8450-5266afa2d382" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:58:55 crc kubenswrapper[4875]: I0130 16:58:55.909604 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:55 crc kubenswrapper[4875]: E0130 16:58:55.911504 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:56.411473767 +0000 UTC m=+146.958837150 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.011237 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:56 crc kubenswrapper[4875]: E0130 16:58:56.012118 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:56.512102334 +0000 UTC m=+147.059465717 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.092956 4875 csr.go:261] certificate signing request csr-c4rmt is approved, waiting to be issued Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.105670 4875 csr.go:257] certificate signing request csr-c4rmt is issued Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.113659 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:56 crc kubenswrapper[4875]: E0130 16:58:56.114233 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:56.614217278 +0000 UTC m=+147.161580661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.215775 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:56 crc kubenswrapper[4875]: E0130 16:58:56.216446 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:56.716429615 +0000 UTC m=+147.263792998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.295743 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.295812 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.320422 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:56 crc kubenswrapper[4875]: E0130 16:58:56.320835 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:56.82082227 +0000 UTC m=+147.368185653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.421620 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:56 crc kubenswrapper[4875]: E0130 16:58:56.422053 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:56.922034625 +0000 UTC m=+147.469398008 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.526600 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:56 crc kubenswrapper[4875]: E0130 16:58:56.527226 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:57.027215354 +0000 UTC m=+147.574578727 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.627988 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:56 crc kubenswrapper[4875]: E0130 16:58:56.628307 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:57.128287416 +0000 UTC m=+147.675650799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.728618 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:56 crc kubenswrapper[4875]: E0130 16:58:56.729352 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:57.229332766 +0000 UTC m=+147.776696149 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.759275 4875 generic.go:334] "Generic (PLEG): container finished" podID="38f0d965-f1ec-4d01-9155-d3740a9ce78f" containerID="1b75b258e83d23a5aaf161f370ad737972b0ddf7bbdd9ff3ca6333f83aa072a3" exitCode=0 Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.760226 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" event={"ID":"38f0d965-f1ec-4d01-9155-d3740a9ce78f","Type":"ContainerDied","Data":"1b75b258e83d23a5aaf161f370ad737972b0ddf7bbdd9ff3ca6333f83aa072a3"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.764551 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-h4ql7" event={"ID":"16d079a0-8b15-4afe-b80b-29edde7f9251","Type":"ContainerStarted","Data":"f0e1d1b678df18e829339dd44d126ae5de9df092d1a1b806e1546b9e77868fdc"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.764624 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-h4ql7" event={"ID":"16d079a0-8b15-4afe-b80b-29edde7f9251","Type":"ContainerStarted","Data":"49cd84f957716251b70ba165eff7426154f44f08714eba75f846a18e6b4377a3"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.769965 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mfrmm" event={"ID":"ad3fff93-6553-4492-8bf6-03118aa9f089","Type":"ContainerStarted","Data":"ff18f8ea51d9641e82b3036bbfa9198807c05341e78f8cfe09230e6ee17dab24"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.770020 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mfrmm" 
event={"ID":"ad3fff93-6553-4492-8bf6-03118aa9f089","Type":"ContainerStarted","Data":"c097e15ee1ef0fb86386d588aa9a3b6041989d07a12614927e5208d344ccfbf4"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.770069 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-mfrmm" Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.787307 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt" event={"ID":"94dc77e6-c491-4bda-a95f-6ab4892d06db","Type":"ContainerStarted","Data":"0020f39d9d126bbe926efda0d8e2cc87d2f29b6a281791f35168a10723dc25d0"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.787348 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt" event={"ID":"94dc77e6-c491-4bda-a95f-6ab4892d06db","Type":"ContainerStarted","Data":"57c565296df128fe5a9fb751057f880a4b256984708518da4483e061bb55168c"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.795219 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.796066 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.802103 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4gqn8" event={"ID":"edaae5aa-0654-4349-9473-907e90886e59","Type":"ContainerStarted","Data":"4a8b4d5a294512e30364942e6abc27dd7570cfbdf05aa84802514de1bdefb9ba"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.806233 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pgmbb" event={"ID":"69db4421-c7a4-42f0-9138-e132dda1bd51","Type":"ContainerStarted","Data":"14a881dea6fa335b287de4beeb612d7676091a93c5d4c80ba2d8e76de376a509"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.806269 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pgmbb" event={"ID":"69db4421-c7a4-42f0-9138-e132dda1bd51","Type":"ContainerStarted","Data":"796ea6593fc41bd8467ef659edb722c12f97e30a01b0859efc30ee22aaf580fc"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.808644 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.809116 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4zq5" event={"ID":"45958d91-5d71-4ecc-9174-75d0d4e22f5d","Type":"ContainerStarted","Data":"a2a4b4d73435927be82466d015f5857356bb59d1dad88c1018a0b720cf7ab9e7"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.809158 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4zq5" event={"ID":"45958d91-5d71-4ecc-9174-75d0d4e22f5d","Type":"ContainerStarted","Data":"ee7f2dfb8e0db236d0eace1741a442a3eb849915c4e5c2873ac6d918a80ce641"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.811397 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrtj9" 
event={"ID":"c315c604-594d-4069-823c-9859b87e22c7","Type":"ContainerStarted","Data":"9b606b50b167115c02e7749e6585404439acdd70d0bb43134cb779824d818776"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.811433 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrtj9" event={"ID":"c315c604-594d-4069-823c-9859b87e22c7","Type":"ContainerStarted","Data":"02f69a032ba40a2d9f550d4853367e5301108fe8c2e085904ad803ceb66c2947"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.817159 4875 patch_prober.go:28] interesting pod/router-default-5444994796-5v2bh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:58:56 crc kubenswrapper[4875]: [-]has-synced failed: reason withheld Jan 30 16:58:56 crc kubenswrapper[4875]: [+]process-running ok Jan 30 16:58:56 crc kubenswrapper[4875]: healthz check failed Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.817207 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5v2bh" podUID="0d15a27f-97a8-4c8e-8450-5266afa2d382" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.817682 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-mfrmm" podStartSLOduration=7.817666683 podStartE2EDuration="7.817666683s" podCreationTimestamp="2026-01-30 16:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:56.817281981 +0000 UTC m=+147.364645354" watchObservedRunningTime="2026-01-30 16:58:56.817666683 +0000 UTC m=+147.365030066" Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.823863 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pwgd8" event={"ID":"53ad913d-a076-4972-93ae-1271d4c2ab76","Type":"ContainerStarted","Data":"8501ab1c776db5ceb0f3190041c076e8fd89069825408bf71fccdd2b6833248c"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.823919 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pwgd8" event={"ID":"53ad913d-a076-4972-93ae-1271d4c2ab76","Type":"ContainerStarted","Data":"c2711bb7803c8eec2a329acf0b56d3fff4e86a25b02478c585321b88d9db246b"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.823933 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pwgd8" event={"ID":"53ad913d-a076-4972-93ae-1271d4c2ab76","Type":"ContainerStarted","Data":"de68c1ab6c05f1de0466ac0c3fd848fc047e33ffee6de410f5f242854d6457e7"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.830776 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:56 crc kubenswrapper[4875]: E0130 16:58:56.831514 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:57.331487961 +0000 UTC m=+147.878851374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.833691 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm" event={"ID":"0dce9182-7f6f-48d8-a9bf-096fd7ca43ac","Type":"ContainerStarted","Data":"df78a04b4e53f2e342dd9466b4fbd2dc582baebca99a1a9caae86f9267a13106"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.833762 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm" event={"ID":"0dce9182-7f6f-48d8-a9bf-096fd7ca43ac","Type":"ContainerStarted","Data":"28b48486629ca2775a3f244461b05cfc607a44211ef9e02509cdd85130b8989d"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.833772 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm" event={"ID":"0dce9182-7f6f-48d8-a9bf-096fd7ca43ac","Type":"ContainerStarted","Data":"2f1c87f75e7842ee2fd1cd97ec3b8c660adb78b76d46836fbafa4a627907fdd0"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.846638 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ht6ll" event={"ID":"ac862908-f2bf-42a2-b453-12f722f2cae3","Type":"ContainerStarted","Data":"f8448f958522ec4a0704f30f18f99b4bcee0e9d4525a6c9afab057a096007274"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.846693 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ht6ll" event={"ID":"ac862908-f2bf-42a2-b453-12f722f2cae3","Type":"ContainerStarted","Data":"5c0f28e64f851dcc1a842a93e278652a1f8e010637b9fdc0dd4b1ecdb4f848b9"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.854246 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5v28g" event={"ID":"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83","Type":"ContainerStarted","Data":"331118e6dc732291315b31f9911ab865c8c13437539b57c5a2b740d660d819cd"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.859931 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-8sjsp" event={"ID":"1d8dcd63-7b87-47d3-84b8-3986857a6bc8","Type":"ContainerStarted","Data":"e80b64488b225fc05dbdc79b3b4c807ea8124a2a2db97493ea71be161b538da8"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.864174 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nsbwm" event={"ID":"fd902c0b-6664-425d-ad65-dd2069a17fae","Type":"ContainerStarted","Data":"1d148a4f5714836e87327e5e3439e1260a8283d0e835283cf91ed7d2c31e5650"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.865101 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2njb9" 
event={"ID":"0ce1959e-9d34-4221-8ede-5ec652b44b0d","Type":"ContainerStarted","Data":"41852d1bf59737acd2541e7ef9c26b3f0240c6ee262e2ed5c3a9d7e996807b64"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.876343 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt" podStartSLOduration=126.87632102 podStartE2EDuration="2m6.87632102s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:56.846494676 +0000 UTC m=+147.393858059" watchObservedRunningTime="2026-01-30 16:58:56.87632102 +0000 UTC m=+147.423684403" Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.877850 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht" event={"ID":"b7997c32-6e00-4402-acfb-d3bf63227f0b","Type":"ContainerStarted","Data":"bbea800ca3afb53fb7358387b5842dcc1ed7adc82499a6c4a96556412ac93fce"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.877887 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht" event={"ID":"b7997c32-6e00-4402-acfb-d3bf63227f0b","Type":"ContainerStarted","Data":"c437ed43c50f6ce5246129571c77d5f22cb004af5426036dee64736960091b82"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.877898 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht" event={"ID":"b7997c32-6e00-4402-acfb-d3bf63227f0b","Type":"ContainerStarted","Data":"794a86a64e9f5f5ce73c827562c9037871ea62996ebd9ce3b84e2b2bc3383cf4"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.879801 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmcnv" event={"ID":"d950d064-e8ae-47c8-adb8-cb60ba5bd5b9","Type":"ContainerStarted","Data":"829b0c9564f7e25119640564fd981d50c2fa34e9f62d30f13a3219dba618c283"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.879842 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmcnv" event={"ID":"d950d064-e8ae-47c8-adb8-cb60ba5bd5b9","Type":"ContainerStarted","Data":"bb2a740263bd2b455ddfdc7aa6b5a0efc1b823a27089e40c799b968c0eb7a914"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.904937 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qc97s" event={"ID":"69e24be4-7935-43ce-9815-ed1fa40e9933","Type":"ContainerStarted","Data":"d25c04888c4adecf7ac60fa65081fb7b90fedfb74aaebf748b0f0411d9ea7790"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.904978 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qc97s" event={"ID":"69e24be4-7935-43ce-9815-ed1fa40e9933","Type":"ContainerStarted","Data":"84ee4aa5276c6b95a76ec04999ff2e6197e8e61086f0a634dc6aef69f5f1d34a"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.905468 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-qc97s" Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.912512 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca/service-ca-9c57cc56f-pgmbb" podStartSLOduration=126.912496031 podStartE2EDuration="2m6.912496031s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:56.875733821 +0000 UTC m=+147.423097224" watchObservedRunningTime="2026-01-30 16:58:56.912496031 +0000 UTC m=+147.459859414" Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.913573 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ztrcm" podStartSLOduration=126.913568964 podStartE2EDuration="2m6.913568964s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:56.912702977 +0000 UTC m=+147.460066370" watchObservedRunningTime="2026-01-30 16:58:56.913568964 +0000 UTC m=+147.460932347" Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.913730 4875 patch_prober.go:28] interesting pod/downloads-7954f5f757-qc97s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.913788 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qc97s" podUID="69e24be4-7935-43ce-9815-ed1fa40e9933" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.38:8080/\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.922812 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mjwmm" event={"ID":"be8257d7-3aa4-406a-9f47-bda46f688e32","Type":"ContainerStarted","Data":"735fb83ae494bc38b9ef186cda1980d576a526a7625d7197eb574e9435042dc0"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.923100 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mjwmm" event={"ID":"be8257d7-3aa4-406a-9f47-bda46f688e32","Type":"ContainerStarted","Data":"dab15f513739102789f535b3f039eab21e2b8a51611abce741db17ece285b333"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.932434 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:56 crc kubenswrapper[4875]: E0130 16:58:56.932988 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:57.432967235 +0000 UTC m=+147.980330688 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.936708 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-stgmg" event={"ID":"8ef68fb4-d9c6-484f-a05e-a8e5d3460a28","Type":"ContainerStarted","Data":"6b0eb1cb6e25510ea61b60475809cd4009b6334ffdef7964656cb38915cc999a"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.936752 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-stgmg" event={"ID":"8ef68fb4-d9c6-484f-a05e-a8e5d3460a28","Type":"ContainerStarted","Data":"50333ea2c5ed7b5ec34f0468cdfbc90b29c478ecf24ad8a47746a05eccbcb8cd"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.937041 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-stgmg" Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.942352 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-scxjx" event={"ID":"8555787c-19c9-49c3-8b1a-7261cb693b97","Type":"ContainerStarted","Data":"f2f85e8bec75da25b236caaf1245cd6cdb5c5b5d0c51d047343bf511d08139f4"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.944124 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-r4zq5" podStartSLOduration=126.94410772 podStartE2EDuration="2m6.94410772s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:56.943593304 +0000 UTC m=+147.490956687" watchObservedRunningTime="2026-01-30 16:58:56.94410772 +0000 UTC m=+147.491471103" Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.961737 4875 patch_prober.go:28] interesting pod/console-operator-58897d9998-stgmg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.961795 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-stgmg" podUID="8ef68fb4-d9c6-484f-a05e-a8e5d3460a28" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.965498 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jrtj9" podStartSLOduration=126.965479852 podStartE2EDuration="2m6.965479852s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:56.964851192 +0000 UTC m=+147.512214575" 
watchObservedRunningTime="2026-01-30 16:58:56.965479852 +0000 UTC m=+147.512843235" Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.967014 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5" event={"ID":"35f19686-9d5d-470f-8431-24ba28e8237e","Type":"ContainerStarted","Data":"63ca28c28733c898b493744069ee3bbe2c95b70e83dfb8b550bb633fd49c30a8"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.967054 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5" event={"ID":"35f19686-9d5d-470f-8431-24ba28e8237e","Type":"ContainerStarted","Data":"34e49611f3570c276f50734405e31bb68cdb52b988e81c5a6598f922177ce6dc"} Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.967995 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5" Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.970610 4875 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7d4p5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 30 16:58:56 crc kubenswrapper[4875]: I0130 16:58:56.970650 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5" podUID="35f19686-9d5d-470f-8431-24ba28e8237e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.004203 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s24dp" event={"ID":"80518ae7-5ae1-40f4-8551-c97d8dfe4433","Type":"ContainerStarted","Data":"cfad278511bab1103f7391beccfe596b466dcca8ec96cd5ce362592fdf956c9a"} Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.004240 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s24dp" event={"ID":"80518ae7-5ae1-40f4-8551-c97d8dfe4433","Type":"ContainerStarted","Data":"569a51b8b5f916df899bd42a0dac137809455d8848658d6468d529cc9d2d0dc1"} Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.004250 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s24dp" event={"ID":"80518ae7-5ae1-40f4-8551-c97d8dfe4433","Type":"ContainerStarted","Data":"bb3bc8d643b45da420518832a3dbfe4c3943254dd121292b66ff07bfa1825e3f"} Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.004264 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s24dp" Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.012512 4875 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6hpsd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.012553 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" 
podUID="beaaba45-df33-4540-ab78-79f1dc92f87b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.033909 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:57 crc kubenswrapper[4875]: E0130 16:58:57.034980 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:57.534965605 +0000 UTC m=+148.082328988 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.036736 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pwgd8" podStartSLOduration=127.03672157 podStartE2EDuration="2m7.03672157s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:56.997856936 +0000 UTC m=+147.545220309" watchObservedRunningTime="2026-01-30 16:58:57.03672157 +0000 UTC m=+147.584084973" Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.043748 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-flhcf" Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.065247 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9z79s" Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.066015 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5" podStartSLOduration=127.066000426 podStartE2EDuration="2m7.066000426s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:57.065558432 +0000 UTC m=+147.612921815" watchObservedRunningTime="2026-01-30 16:58:57.066000426 +0000 UTC m=+147.613363809" Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.159116 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.160826 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.161621 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-ht6ll" podStartSLOduration=127.161598518 podStartE2EDuration="2m7.161598518s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:57.15938048 +0000 UTC m=+147.706743853" watchObservedRunningTime="2026-01-30 16:58:57.161598518 +0000 UTC m=+147.708961911" Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.161747 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nmcnv" podStartSLOduration=127.161742383 podStartE2EDuration="2m7.161742383s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:57.086875434 +0000 UTC m=+147.634238817" watchObservedRunningTime="2026-01-30 16:58:57.161742383 +0000 UTC m=+147.709105766" Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.167261 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 16:53:56 +0000 UTC, rotation deadline is 2026-12-14 22:00:32.861255712 +0000 UTC Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.167327 4875 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7637h1m35.693931376s for next certificate rotation Jan 30 16:58:57 crc kubenswrapper[4875]: E0130 16:58:57.169263 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:57.669243945 +0000 UTC m=+148.216607328 (durationBeforeRetry 500ms). 
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.209329 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-stgmg" podStartSLOduration=127.209313957 podStartE2EDuration="2m7.209313957s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:57.208050168 +0000 UTC m=+147.755413551" watchObservedRunningTime="2026-01-30 16:58:57.209313957 +0000 UTC m=+147.756677340"
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.249154 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mjwmm" podStartSLOduration=127.2491276 podStartE2EDuration="2m7.2491276s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:57.24072439 +0000 UTC m=+147.788087773" watchObservedRunningTime="2026-01-30 16:58:57.2491276 +0000 UTC m=+147.796491023"
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.267153 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:58:57 crc kubenswrapper[4875]: E0130 16:58:57.267448 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:57.767427887 +0000 UTC m=+148.314791280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.267503 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:57 crc kubenswrapper[4875]: E0130 16:58:57.267857 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:57.76784784 +0000 UTC m=+148.315211223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.299736 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dsrht" podStartSLOduration=127.299719968 podStartE2EDuration="2m7.299719968s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:57.27302216 +0000 UTC m=+147.820385553" watchObservedRunningTime="2026-01-30 16:58:57.299719968 +0000 UTC m=+147.847083351"
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.301466 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s24dp" podStartSLOduration=127.301460662 podStartE2EDuration="2m7.301460662s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:57.298992225 +0000 UTC m=+147.846355608" watchObservedRunningTime="2026-01-30 16:58:57.301460662 +0000 UTC m=+147.848824045"
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.320975 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-qc97s" podStartSLOduration=127.320960106 podStartE2EDuration="2m7.320960106s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:57.318542501 +0000 UTC m=+147.865905884" watchObservedRunningTime="2026-01-30 16:58:57.320960106 +0000 UTC m=+147.868323489"
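Annotation: the pod_startup_latency_tracker lines are plain timestamp subtraction: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, and the zero-valued firstStartedPulling/lastFinishedPulling just mean no image pull was observed for the pod. Checking the downloads-7954f5f757-qc97s entry above in Go:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the downloads-7954f5f757-qc97s tracker entry.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-30 16:56:50 +0000 UTC")
	running, _ := time.Parse(layout, "2026-01-30 16:58:57.320960106 +0000 UTC")

	d := running.Sub(created)
	// Prints: 127.320960106 2m7.320960106s -- matching the logged
	// podStartSLOduration and podStartE2EDuration exactly.
	fmt.Println(d.Seconds(), d)
}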
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.372061 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:58:57 crc kubenswrapper[4875]: E0130 16:58:57.372313 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:57.872297777 +0000 UTC m=+148.419661160 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.473095 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:57 crc kubenswrapper[4875]: E0130 16:58:57.473445 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:57.97343263 +0000 UTC m=+148.520796013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.573662 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:58:57 crc kubenswrapper[4875]: E0130 16:58:57.574202 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:58.074186502 +0000 UTC m=+148.621549885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
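Annotation: the readiness failures in this window are all "connect: connection refused", meaning the prober reached the pod IP but nothing was listening on the port yet; they clear once each freshly started operator binds its listener. Functionally the kubelet prober is a short-timeout HTTP GET that treats any 2xx/3xx as success and skips serving-certificate verification; a rough stand-in, with the URL taken from the console-operator probe and the timeout value assumed:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	// Short timeout and no certificate verification, mirroring how the
	// kubelet probes HTTPS endpoints; 1s is an illustrative value.
	client := &http.Client{
		Timeout:   time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused" while the listener is not up
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
}

func main() {
	if err := probe("https://10.217.0.17:8443/readyz"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}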
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.813950 4875 patch_prober.go:28] interesting pod/router-default-5444994796-5v2bh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:58:57 crc kubenswrapper[4875]: [-]has-synced failed: reason withheld
Jan 30 16:58:57 crc kubenswrapper[4875]: [+]process-running ok
Jan 30 16:58:57 crc kubenswrapper[4875]: healthz check failed
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.814022 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5v2bh" podUID="0d15a27f-97a8-4c8e-8450-5266afa2d382" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.877610 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.877686 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:58:57 crc kubenswrapper[4875]: E0130 16:58:57.877934 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:58.377911771 +0000 UTC m=+148.925275154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.878537 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.978266 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:58:57 crc kubenswrapper[4875]: E0130 16:58:57.978552 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:58.478520948 +0000 UTC m=+149.025884331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.978638 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.978688 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.978730 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
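Annotation: the router startup probe body follows the Kubernetes healthz convention: one [+]/[-] line per named check, reasons withheld by default, and a trailing "healthz check failed" with status 500 when any check fails. A minimal handler reproducing that output shape (check names copied from the probe body above; the pass/fail values are hard-coded purely for illustration):

package main

import (
	"fmt"
	"net/http"
	"strings"
)

type check struct {
	name string
	ok   bool
}

func healthz(w http.ResponseWriter, _ *http.Request) {
	// Fake results chosen to reproduce the log's probe body exactly.
	checks := []check{
		{"backend-http", false},
		{"has-synced", false},
		{"process-running", true},
	}
	var b strings.Builder
	failed := false
	for _, c := range checks {
		if c.ok {
			fmt.Fprintf(&b, "[+]%s ok\n", c.name)
		} else {
			fmt.Fprintf(&b, "[-]%s failed: reason withheld\n", c.name)
			failed = true
		}
	}
	if failed {
		b.WriteString("healthz check failed\n")
		// Yields the prober's "HTTP probe failed with statuscode: 500".
		http.Error(w, b.String(), http.StatusInternalServerError)
		return
	}
	fmt.Fprint(w, b.String())
}

func main() {
	http.HandleFunc("/healthz", healthz)
	http.ListenAndServe(":8080", nil)
}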
started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:57 crc kubenswrapper[4875]: E0130 16:58:57.978986 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:58.478972863 +0000 UTC m=+149.026336246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.987293 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:58:57 crc kubenswrapper[4875]: I0130 16:58:57.988037 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:57.999658 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.010933 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-h4ql7" event={"ID":"16d079a0-8b15-4afe-b80b-29edde7f9251","Type":"ContainerStarted","Data":"f131c1677db8e7fd95633bf6a3d5f323cac4bda8bb5a0c3bce6cdb662b6c8c8c"} Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.012898 4875 generic.go:334] "Generic (PLEG): container finished" podID="94dc77e6-c491-4bda-a95f-6ab4892d06db" containerID="0020f39d9d126bbe926efda0d8e2cc87d2f29b6a281791f35168a10723dc25d0" exitCode=0 Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.012948 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt" event={"ID":"94dc77e6-c491-4bda-a95f-6ab4892d06db","Type":"ContainerDied","Data":"0020f39d9d126bbe926efda0d8e2cc87d2f29b6a281791f35168a10723dc25d0"} Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.014217 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-5v28g" event={"ID":"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83","Type":"ContainerStarted","Data":"8571725c3a52c75e8684cd183a3652f8be23ad82515c0fb37dec11318cf124ad"} Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.016347 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" event={"ID":"38f0d965-f1ec-4d01-9155-d3740a9ce78f","Type":"ContainerStarted","Data":"de64756a48cc321a505188230e555053ced7690a5ee3a9b5455d6c9a419d5962"} Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.016413 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" event={"ID":"38f0d965-f1ec-4d01-9155-d3740a9ce78f","Type":"ContainerStarted","Data":"c8a29c990f348efbd8917a65df85a6b9d1c631fb71a3a783849aad02cdd9896b"} Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.017094 4875 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6hpsd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" start-of-body= Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.017122 4875 patch_prober.go:28] interesting pod/console-operator-58897d9998-stgmg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.017132 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" podUID="beaaba45-df33-4540-ab78-79f1dc92f87b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": dial tcp 10.217.0.22:8080: connect: connection refused" Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.017163 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-stgmg" podUID="8ef68fb4-d9c6-484f-a05e-a8e5d3460a28" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.017561 4875 patch_prober.go:28] interesting pod/downloads-7954f5f757-qc97s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.017607 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qc97s" podUID="69e24be4-7935-43ce-9815-ed1fa40e9933" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.38:8080/\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.022068 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7d4p5" Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.034742 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-h4ql7" podStartSLOduration=128.03472587 podStartE2EDuration="2m8.03472587s" 
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.034742 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-h4ql7" podStartSLOduration=128.03472587 podStartE2EDuration="2m8.03472587s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:58.034378179 +0000 UTC m=+148.581741562" watchObservedRunningTime="2026-01-30 16:58:58.03472587 +0000 UTC m=+148.582089253"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.080723 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:58:58 crc kubenswrapper[4875]: E0130 16:58:58.080952 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:58.580919482 +0000 UTC m=+149.128282865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.083139 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:58 crc kubenswrapper[4875]: E0130 16:58:58.086209 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:58.586194065 +0000 UTC m=+149.133557448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.096003 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.096981 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.105249 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.105496 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.117324 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.138086 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" podStartSLOduration=128.138069162 podStartE2EDuration="2m8.138069162s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:58.137569817 +0000 UTC m=+148.684933200" watchObservedRunningTime="2026-01-30 16:58:58.138069162 +0000 UTC m=+148.685432545"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.154763 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.163896 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.171543 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.188575 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.188951 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e78733b2-73be-4247-825e-b047dbedcdd4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e78733b2-73be-4247-825e-b047dbedcdd4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.189071 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e78733b2-73be-4247-825e-b047dbedcdd4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e78733b2-73be-4247-825e-b047dbedcdd4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:58:58 crc kubenswrapper[4875]: E0130 16:58:58.189189 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:58.689173085 +0000 UTC m=+149.236536468 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
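Annotation: the reflector.go "Caches populated" lines mark kubelet's per-namespace object reflectors completing their initial List+Watch for the Secret and ConfigMap referenced by the new revision-pruner pod. Application code gets the same guarantee from a SharedInformerFactory whose caches are synced before use; a generic sketch (kubeconfig path and resync interval assumed):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Namespace-scoped watch, analogous to kubelet's per-namespace reflectors.
	factory := informers.NewSharedInformerFactoryWithOptions(cs, 10*time.Minute,
		informers.WithNamespace("openshift-kube-controller-manager"))
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Blocks until the initial List completes -- the "Caches populated" moment.
	if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("caches populated for *v1.ConfigMap")
}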
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.290735 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.290779 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e78733b2-73be-4247-825e-b047dbedcdd4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e78733b2-73be-4247-825e-b047dbedcdd4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.290813 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e78733b2-73be-4247-825e-b047dbedcdd4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e78733b2-73be-4247-825e-b047dbedcdd4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.290909 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e78733b2-73be-4247-825e-b047dbedcdd4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e78733b2-73be-4247-825e-b047dbedcdd4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:58:58 crc kubenswrapper[4875]: E0130 16:58:58.291151 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:58.791137545 +0000 UTC m=+149.338500928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.317372 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e78733b2-73be-4247-825e-b047dbedcdd4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e78733b2-73be-4247-825e-b047dbedcdd4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.392467 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:58:58 crc kubenswrapper[4875]: E0130 16:58:58.392769 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:58.892728722 +0000 UTC m=+149.440092105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.426335 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.501353 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:58 crc kubenswrapper[4875]: E0130 16:58:58.501871 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:59.001853833 +0000 UTC m=+149.549217216 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.526933 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pdr7w"]
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.534445 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdr7w"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.543065 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.556548 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pdr7w"]
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.602363 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.602527 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-catalog-content\") pod \"certified-operators-pdr7w\" (UID: \"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79\") " pod="openshift-marketplace/certified-operators-pdr7w"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.602553 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dqx5\" (UniqueName: \"kubernetes.io/projected/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-kube-api-access-9dqx5\") pod \"certified-operators-pdr7w\" (UID: \"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79\") " pod="openshift-marketplace/certified-operators-pdr7w"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.602622 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-utilities\") pod \"certified-operators-pdr7w\" (UID: \"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79\") " pod="openshift-marketplace/certified-operators-pdr7w"
Jan 30 16:58:58 crc kubenswrapper[4875]: E0130 16:58:58.602716 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:59.102701437 +0000 UTC m=+149.650064820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.703547 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-catalog-content\") pod \"certified-operators-pdr7w\" (UID: \"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79\") " pod="openshift-marketplace/certified-operators-pdr7w"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.703620 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dqx5\" (UniqueName: \"kubernetes.io/projected/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-kube-api-access-9dqx5\") pod \"certified-operators-pdr7w\" (UID: \"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79\") " pod="openshift-marketplace/certified-operators-pdr7w"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.703708 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-utilities\") pod \"certified-operators-pdr7w\" (UID: \"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79\") " pod="openshift-marketplace/certified-operators-pdr7w"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.703748 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:58 crc kubenswrapper[4875]: E0130 16:58:58.704103 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:59.204091199 +0000 UTC m=+149.751454582 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
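Annotation: the marketplace catalog pods (certified-operators-*, community-operators-*) attach only ephemeral volumes: two emptyDir scratch areas (catalog-content, utilities) plus a projected service-account token (kube-api-access-*), which is why their SetUp succeeds immediately while the CSI-backed registry PVC keeps cycling. The two scratch volumes expressed with client-go types (names from the log; the rest of the pod spec is omitted):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Volume names taken from the certified-operators-pdr7w entries above.
	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
				{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
			},
		},
	}
	for _, v := range pod.Spec.Volumes {
		fmt.Println("ephemeral volume:", v.Name)
	}
}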
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.704236 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-catalog-content\") pod \"certified-operators-pdr7w\" (UID: \"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79\") " pod="openshift-marketplace/certified-operators-pdr7w"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.704490 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-utilities\") pod \"certified-operators-pdr7w\" (UID: \"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79\") " pod="openshift-marketplace/certified-operators-pdr7w"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.732267 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sd4tv"]
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.737233 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sd4tv"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.742574 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dqx5\" (UniqueName: \"kubernetes.io/projected/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-kube-api-access-9dqx5\") pod \"certified-operators-pdr7w\" (UID: \"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79\") " pod="openshift-marketplace/certified-operators-pdr7w"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.743470 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.805893 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.806144 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87c78ecd-3fa5-40a9-ac0d-25449555b524-utilities\") pod \"community-operators-sd4tv\" (UID: \"87c78ecd-3fa5-40a9-ac0d-25449555b524\") " pod="openshift-marketplace/community-operators-sd4tv"
Jan 30 16:58:58 crc kubenswrapper[4875]: E0130 16:58:58.806182 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:59.306158161 +0000 UTC m=+149.853521544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.806234 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87c78ecd-3fa5-40a9-ac0d-25449555b524-catalog-content\") pod \"community-operators-sd4tv\" (UID: \"87c78ecd-3fa5-40a9-ac0d-25449555b524\") " pod="openshift-marketplace/community-operators-sd4tv"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.806278 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhv4j\" (UniqueName: \"kubernetes.io/projected/87c78ecd-3fa5-40a9-ac0d-25449555b524-kube-api-access-xhv4j\") pod \"community-operators-sd4tv\" (UID: \"87c78ecd-3fa5-40a9-ac0d-25449555b524\") " pod="openshift-marketplace/community-operators-sd4tv"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.822037 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sd4tv"]
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.823196 4875 patch_prober.go:28] interesting pod/router-default-5444994796-5v2bh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:58:58 crc kubenswrapper[4875]: [-]has-synced failed: reason withheld
Jan 30 16:58:58 crc kubenswrapper[4875]: [+]process-running ok
Jan 30 16:58:58 crc kubenswrapper[4875]: healthz check failed
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.823470 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5v2bh" podUID="0d15a27f-97a8-4c8e-8450-5266afa2d382" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.885658 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdr7w"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.907111 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.907445 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87c78ecd-3fa5-40a9-ac0d-25449555b524-utilities\") pod \"community-operators-sd4tv\" (UID: \"87c78ecd-3fa5-40a9-ac0d-25449555b524\") " pod="openshift-marketplace/community-operators-sd4tv"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.907473 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87c78ecd-3fa5-40a9-ac0d-25449555b524-catalog-content\") pod \"community-operators-sd4tv\" (UID: \"87c78ecd-3fa5-40a9-ac0d-25449555b524\") " pod="openshift-marketplace/community-operators-sd4tv"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.907510 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhv4j\" (UniqueName: \"kubernetes.io/projected/87c78ecd-3fa5-40a9-ac0d-25449555b524-kube-api-access-xhv4j\") pod \"community-operators-sd4tv\" (UID: \"87c78ecd-3fa5-40a9-ac0d-25449555b524\") " pod="openshift-marketplace/community-operators-sd4tv"
Jan 30 16:58:58 crc kubenswrapper[4875]: E0130 16:58:58.908538 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:59.408524983 +0000 UTC m=+149.955888376 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.912531 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87c78ecd-3fa5-40a9-ac0d-25449555b524-catalog-content\") pod \"community-operators-sd4tv\" (UID: \"87c78ecd-3fa5-40a9-ac0d-25449555b524\") " pod="openshift-marketplace/community-operators-sd4tv"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.912610 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87c78ecd-3fa5-40a9-ac0d-25449555b524-utilities\") pod \"community-operators-sd4tv\" (UID: \"87c78ecd-3fa5-40a9-ac0d-25449555b524\") " pod="openshift-marketplace/community-operators-sd4tv"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.943406 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tz4fm"]
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.947156 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhv4j\" (UniqueName: \"kubernetes.io/projected/87c78ecd-3fa5-40a9-ac0d-25449555b524-kube-api-access-xhv4j\") pod \"community-operators-sd4tv\" (UID: \"87c78ecd-3fa5-40a9-ac0d-25449555b524\") " pod="openshift-marketplace/community-operators-sd4tv"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.959900 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tz4fm"
Jan 30 16:58:58 crc kubenswrapper[4875]: I0130 16:58:58.976683 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tz4fm"]
Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.021046 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:58:59 crc kubenswrapper[4875]: E0130 16:58:59.021342 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:59.521326458 +0000 UTC m=+150.068689841 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.062819 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1e956422491074fb73983a41db0b24930b54c0ee63c24ddf22a610d14b7112ff"} Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.071286 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5v28g" event={"ID":"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83","Type":"ContainerStarted","Data":"a24b76c2db8b9cd95aafae393b8786f1461b1e3933d8d6d28866325e26469630"} Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.073350 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ca1035b9019e4827a9c8310ed1d6d7594d55e41f5ec0116dc6b1441cbfaecf38"} Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.084711 4875 patch_prober.go:28] interesting pod/downloads-7954f5f757-qc97s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.084767 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qc97s" podUID="69e24be4-7935-43ce-9815-ed1fa40e9933" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.38:8080/\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.086000 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sd4tv" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.121315 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.122100 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228882df-4f66-4157-836b-f95a581fe216-catalog-content\") pod \"certified-operators-tz4fm\" (UID: \"228882df-4f66-4157-836b-f95a581fe216\") " pod="openshift-marketplace/certified-operators-tz4fm" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.122161 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228882df-4f66-4157-836b-f95a581fe216-utilities\") pod \"certified-operators-tz4fm\" (UID: \"228882df-4f66-4157-836b-f95a581fe216\") " pod="openshift-marketplace/certified-operators-tz4fm" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.122189 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.122213 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smvmm\" (UniqueName: \"kubernetes.io/projected/228882df-4f66-4157-836b-f95a581fe216-kube-api-access-smvmm\") pod \"certified-operators-tz4fm\" (UID: \"228882df-4f66-4157-836b-f95a581fe216\") " pod="openshift-marketplace/certified-operators-tz4fm" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.122464 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vwt6q"] Jan 30 16:58:59 crc kubenswrapper[4875]: E0130 16:58:59.122558 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:59.622544964 +0000 UTC m=+150.169908347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.124631 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vwt6q" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.131804 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vwt6q"] Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.224105 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.224602 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228882df-4f66-4157-836b-f95a581fe216-catalog-content\") pod \"certified-operators-tz4fm\" (UID: \"228882df-4f66-4157-836b-f95a581fe216\") " pod="openshift-marketplace/certified-operators-tz4fm" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.224688 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6891de92-f1af-4dcc-bc97-c2a2a647515b-catalog-content\") pod \"community-operators-vwt6q\" (UID: \"6891de92-f1af-4dcc-bc97-c2a2a647515b\") " pod="openshift-marketplace/community-operators-vwt6q" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.224765 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgpks\" (UniqueName: \"kubernetes.io/projected/6891de92-f1af-4dcc-bc97-c2a2a647515b-kube-api-access-zgpks\") pod \"community-operators-vwt6q\" (UID: \"6891de92-f1af-4dcc-bc97-c2a2a647515b\") " pod="openshift-marketplace/community-operators-vwt6q" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.224806 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228882df-4f66-4157-836b-f95a581fe216-utilities\") pod \"certified-operators-tz4fm\" (UID: \"228882df-4f66-4157-836b-f95a581fe216\") " pod="openshift-marketplace/certified-operators-tz4fm" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.224889 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smvmm\" (UniqueName: \"kubernetes.io/projected/228882df-4f66-4157-836b-f95a581fe216-kube-api-access-smvmm\") pod \"certified-operators-tz4fm\" (UID: \"228882df-4f66-4157-836b-f95a581fe216\") " pod="openshift-marketplace/certified-operators-tz4fm" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.224970 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6891de92-f1af-4dcc-bc97-c2a2a647515b-utilities\") pod \"community-operators-vwt6q\" (UID: \"6891de92-f1af-4dcc-bc97-c2a2a647515b\") " pod="openshift-marketplace/community-operators-vwt6q" Jan 30 16:58:59 crc kubenswrapper[4875]: E0130 16:58:59.225104 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:59.7250866 +0000 UTC m=+150.272449983 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.227348 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228882df-4f66-4157-836b-f95a581fe216-catalog-content\") pod \"certified-operators-tz4fm\" (UID: \"228882df-4f66-4157-836b-f95a581fe216\") " pod="openshift-marketplace/certified-operators-tz4fm" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.227765 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228882df-4f66-4157-836b-f95a581fe216-utilities\") pod \"certified-operators-tz4fm\" (UID: \"228882df-4f66-4157-836b-f95a581fe216\") " pod="openshift-marketplace/certified-operators-tz4fm" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.261321 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smvmm\" (UniqueName: \"kubernetes.io/projected/228882df-4f66-4157-836b-f95a581fe216-kube-api-access-smvmm\") pod \"certified-operators-tz4fm\" (UID: \"228882df-4f66-4157-836b-f95a581fe216\") " pod="openshift-marketplace/certified-operators-tz4fm" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.321916 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tz4fm" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.330544 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6891de92-f1af-4dcc-bc97-c2a2a647515b-catalog-content\") pod \"community-operators-vwt6q\" (UID: \"6891de92-f1af-4dcc-bc97-c2a2a647515b\") " pod="openshift-marketplace/community-operators-vwt6q" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.336976 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6891de92-f1af-4dcc-bc97-c2a2a647515b-catalog-content\") pod \"community-operators-vwt6q\" (UID: \"6891de92-f1af-4dcc-bc97-c2a2a647515b\") " pod="openshift-marketplace/community-operators-vwt6q" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.343743 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgpks\" (UniqueName: \"kubernetes.io/projected/6891de92-f1af-4dcc-bc97-c2a2a647515b-kube-api-access-zgpks\") pod \"community-operators-vwt6q\" (UID: \"6891de92-f1af-4dcc-bc97-c2a2a647515b\") " pod="openshift-marketplace/community-operators-vwt6q" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.343892 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.343991 4875 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6891de92-f1af-4dcc-bc97-c2a2a647515b-utilities\") pod \"community-operators-vwt6q\" (UID: \"6891de92-f1af-4dcc-bc97-c2a2a647515b\") " pod="openshift-marketplace/community-operators-vwt6q" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.344528 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6891de92-f1af-4dcc-bc97-c2a2a647515b-utilities\") pod \"community-operators-vwt6q\" (UID: \"6891de92-f1af-4dcc-bc97-c2a2a647515b\") " pod="openshift-marketplace/community-operators-vwt6q" Jan 30 16:58:59 crc kubenswrapper[4875]: E0130 16:58:59.345205 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:58:59.845182801 +0000 UTC m=+150.392546184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.383605 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pdr7w"] Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.394812 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgpks\" (UniqueName: \"kubernetes.io/projected/6891de92-f1af-4dcc-bc97-c2a2a647515b-kube-api-access-zgpks\") pod \"community-operators-vwt6q\" (UID: \"6891de92-f1af-4dcc-bc97-c2a2a647515b\") " pod="openshift-marketplace/community-operators-vwt6q" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.447703 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:59 crc kubenswrapper[4875]: E0130 16:58:59.448256 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:58:59.948237654 +0000 UTC m=+150.495601037 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.473857 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vwt6q" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.505455 4875 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.549295 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:59 crc kubenswrapper[4875]: E0130 16:58:59.549654 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:59:00.049642766 +0000 UTC m=+150.597006149 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vcs72" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.620662 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.651736 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt2w5\" (UniqueName: \"kubernetes.io/projected/94dc77e6-c491-4bda-a95f-6ab4892d06db-kube-api-access-zt2w5\") pod \"94dc77e6-c491-4bda-a95f-6ab4892d06db\" (UID: \"94dc77e6-c491-4bda-a95f-6ab4892d06db\") " Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.652083 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.652110 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94dc77e6-c491-4bda-a95f-6ab4892d06db-secret-volume\") pod \"94dc77e6-c491-4bda-a95f-6ab4892d06db\" (UID: \"94dc77e6-c491-4bda-a95f-6ab4892d06db\") " Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.652143 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94dc77e6-c491-4bda-a95f-6ab4892d06db-config-volume\") pod \"94dc77e6-c491-4bda-a95f-6ab4892d06db\" (UID: \"94dc77e6-c491-4bda-a95f-6ab4892d06db\") " Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.652762 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94dc77e6-c491-4bda-a95f-6ab4892d06db-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"94dc77e6-c491-4bda-a95f-6ab4892d06db" (UID: "94dc77e6-c491-4bda-a95f-6ab4892d06db"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:58:59 crc kubenswrapper[4875]: E0130 16:58:59.652841 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:59:00.152815192 +0000 UTC m=+150.700178575 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.659364 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94dc77e6-c491-4bda-a95f-6ab4892d06db-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "94dc77e6-c491-4bda-a95f-6ab4892d06db" (UID: "94dc77e6-c491-4bda-a95f-6ab4892d06db"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.662347 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94dc77e6-c491-4bda-a95f-6ab4892d06db-kube-api-access-zt2w5" (OuterVolumeSpecName: "kube-api-access-zt2w5") pod "94dc77e6-c491-4bda-a95f-6ab4892d06db" (UID: "94dc77e6-c491-4bda-a95f-6ab4892d06db"). InnerVolumeSpecName "kube-api-access-zt2w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.755231 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.755345 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt2w5\" (UniqueName: \"kubernetes.io/projected/94dc77e6-c491-4bda-a95f-6ab4892d06db-kube-api-access-zt2w5\") on node \"crc\" DevicePath \"\"" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.755374 4875 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94dc77e6-c491-4bda-a95f-6ab4892d06db-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.755389 4875 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94dc77e6-c491-4bda-a95f-6ab4892d06db-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:58:59 crc kubenswrapper[4875]: E0130 16:58:59.755565 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:59:00.255550406 +0000 UTC m=+150.802913789 (durationBeforeRetry 500ms). 
Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.759982 4875 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T16:58:59.505666904Z","Handler":null,"Name":""}
Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.773435 4875 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.773483 4875 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.819081 4875 patch_prober.go:28] interesting pod/router-default-5444994796-5v2bh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:58:59 crc kubenswrapper[4875]: [-]has-synced failed: reason withheld
Jan 30 16:58:59 crc kubenswrapper[4875]: [+]process-running ok
Jan 30 16:58:59 crc kubenswrapper[4875]: healthz check failed
Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.819135 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5v2bh" podUID="0d15a27f-97a8-4c8e-8450-5266afa2d382" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.856930 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.867179 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tz4fm"]
Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.867371 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 16:58:59 crc kubenswrapper[4875]: W0130 16:58:59.877486 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod228882df_4f66_4157_836b_f95a581fe216.slice/crio-d514d6c9d251927e1cff073d6d5ff1a72f7f337505999f33b0f744ca86235ca1 WatchSource:0}: Error finding container d514d6c9d251927e1cff073d6d5ff1a72f7f337505999f33b0f744ca86235ca1: Status 404 returned error can't find the container with id d514d6c9d251927e1cff073d6d5ff1a72f7f337505999f33b0f744ca86235ca1
Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.949316 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sd4tv"]
Jan 30 16:58:59 crc kubenswrapper[4875]: W0130 16:58:59.953168 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87c78ecd_3fa5_40a9_ac0d_25449555b524.slice/crio-2ebbca42502f007f99e04dcc1ffa70cc4f61d768b17d9156e329cd4671c303c2 WatchSource:0}: Error finding container 2ebbca42502f007f99e04dcc1ffa70cc4f61d768b17d9156e329cd4671c303c2: Status 404 returned error can't find the container with id 2ebbca42502f007f99e04dcc1ffa70cc4f61d768b17d9156e329cd4671c303c2
Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.959447 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.962315 4875 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 16:58:59 crc kubenswrapper[4875]: I0130 16:58:59.962384 4875 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.017636 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vcs72\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") " pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.035508 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vwt6q"]
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.080166 4875 generic.go:334] "Generic (PLEG): container finished" podID="0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79" containerID="57db538bdc726299dffc198cf067ddab9d7ee689b969dbb874338f545d9996c5" exitCode=0
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.080220 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdr7w" event={"ID":"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79","Type":"ContainerDied","Data":"57db538bdc726299dffc198cf067ddab9d7ee689b969dbb874338f545d9996c5"}
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.080250 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdr7w" event={"ID":"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79","Type":"ContainerStarted","Data":"d3a5a55784bbcb151c45b081c39754b8d00d9ea7792f52b7c12140ebac49a90c"}
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.092192 4875 generic.go:334] "Generic (PLEG): container finished" podID="228882df-4f66-4157-836b-f95a581fe216" containerID="09683f15df5d56b44e58c50fbe203960c0e8c33021dec1b7ba00aa111b8bfd70" exitCode=0
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.092336 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz4fm" event={"ID":"228882df-4f66-4157-836b-f95a581fe216","Type":"ContainerDied","Data":"09683f15df5d56b44e58c50fbe203960c0e8c33021dec1b7ba00aa111b8bfd70"}
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.092369 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz4fm" event={"ID":"228882df-4f66-4157-836b-f95a581fe216","Type":"ContainerStarted","Data":"d514d6c9d251927e1cff073d6d5ff1a72f7f337505999f33b0f744ca86235ca1"}
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.097050 4875 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.098536 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ce6e37071dfabbed7ef2ab40edae588e0a69e22cae033882762643c34f9631dd"}
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.104137 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"66fc5f1d5c138700b76ded903f5bb47955395895a0892f23b06bb19c6957f541"}
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.104188 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c208c9c734b49e8b7185c2461aa54236fe66059247f0e59479c36593ce1eb4cc"}
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.119778 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt" event={"ID":"94dc77e6-c491-4bda-a95f-6ab4892d06db","Type":"ContainerDied","Data":"57c565296df128fe5a9fb751057f880a4b256984708518da4483e061bb55168c"}
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.119814 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57c565296df128fe5a9fb751057f880a4b256984708518da4483e061bb55168c"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.119881 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.130749 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"30cd07b3836177ea5d62891267c49ba824ac7407aa0707d09f3ccb160c8c80e3"}
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.130841 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.145102 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.145751 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5v28g" event={"ID":"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83","Type":"ContainerStarted","Data":"cdd4f7751551f646d5f2d8c8c0ba5d819841053911cf1548a47c49584695fe1a"}
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.145777 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5v28g" event={"ID":"9cad3a5b-885b-4b9c-bdaf-e8adfbfeab83","Type":"ContainerStarted","Data":"16a6e6d74c6fadc165a32c4e98900c392d6b819b8f522f0afb54c24c37a6d315"}
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.145786 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwt6q" event={"ID":"6891de92-f1af-4dcc-bc97-c2a2a647515b","Type":"ContainerStarted","Data":"e83f7b3c0f3a4610e4bb8da5c8d533c0e0e21a1fb0eaee1f68ae2dcd08c41c06"}
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.145796 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd4tv" event={"ID":"87c78ecd-3fa5-40a9-ac0d-25449555b524","Type":"ContainerStarted","Data":"2ebbca42502f007f99e04dcc1ffa70cc4f61d768b17d9156e329cd4671c303c2"}
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.146287 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e78733b2-73be-4247-825e-b047dbedcdd4","Type":"ContainerStarted","Data":"e5952e962e71e75402ee77708f9101fc0be6efc333b71d954b5e435e03a45325"}
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.146334 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e78733b2-73be-4247-825e-b047dbedcdd4","Type":"ContainerStarted","Data":"b4ca5210aac1981359b3a2c37b5f3217a2d82b43ad1e571f789c415a2252e238"}
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.234895 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.249132 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.249117397 podStartE2EDuration="2.249117397s" podCreationTimestamp="2026-01-30 16:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:59:00.247476647 +0000 UTC m=+150.794840030" watchObservedRunningTime="2026-01-30 16:59:00.249117397 +0000 UTC m=+150.796480780"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.251170 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.291034 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.323142 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5v28g" podStartSLOduration=11.323126181 podStartE2EDuration="11.323126181s" podCreationTimestamp="2026-01-30 16:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:59:00.321806649 +0000 UTC m=+150.869170033" watchObservedRunningTime="2026-01-30 16:59:00.323126181 +0000 UTC m=+150.870489554"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.552372 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vcs72"]
Jan 30 16:59:00 crc kubenswrapper[4875]: W0130 16:59:00.555393 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf681b0b0_d68c_44b4_816e_86756d55542c.slice/crio-c0c6e0139fc65723bc53a0f18ad9bea6d6cf90a56b6b3727432006100bdfae67 WatchSource:0}: Error finding container c0c6e0139fc65723bc53a0f18ad9bea6d6cf90a56b6b3727432006100bdfae67: Status 404 returned error can't find the container with id c0c6e0139fc65723bc53a0f18ad9bea6d6cf90a56b6b3727432006100bdfae67
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.716478 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7544f"]
Jan 30 16:59:00 crc kubenswrapper[4875]: E0130 16:59:00.716718 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94dc77e6-c491-4bda-a95f-6ab4892d06db" containerName="collect-profiles"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.716729 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="94dc77e6-c491-4bda-a95f-6ab4892d06db" containerName="collect-profiles"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.716832 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="94dc77e6-c491-4bda-a95f-6ab4892d06db" containerName="collect-profiles"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.717556 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7544f"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.723428 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.739226 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7544f"]
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.787850 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/438bec48-3499-4e88-b9f1-cfb1126424ad-catalog-content\") pod \"redhat-marketplace-7544f\" (UID: \"438bec48-3499-4e88-b9f1-cfb1126424ad\") " pod="openshift-marketplace/redhat-marketplace-7544f"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.787941 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/438bec48-3499-4e88-b9f1-cfb1126424ad-utilities\") pod \"redhat-marketplace-7544f\" (UID: \"438bec48-3499-4e88-b9f1-cfb1126424ad\") " pod="openshift-marketplace/redhat-marketplace-7544f"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.787966 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cprs\" (UniqueName: \"kubernetes.io/projected/438bec48-3499-4e88-b9f1-cfb1126424ad-kube-api-access-7cprs\") pod \"redhat-marketplace-7544f\" (UID: \"438bec48-3499-4e88-b9f1-cfb1126424ad\") " pod="openshift-marketplace/redhat-marketplace-7544f"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.813752 4875 patch_prober.go:28] interesting pod/router-default-5444994796-5v2bh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:59:00 crc kubenswrapper[4875]: [-]has-synced failed: reason withheld
Jan 30 16:59:00 crc kubenswrapper[4875]: [+]process-running ok
Jan 30 16:59:00 crc kubenswrapper[4875]: healthz check failed
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.813833 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5v2bh" podUID="0d15a27f-97a8-4c8e-8450-5266afa2d382" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.888890 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/438bec48-3499-4e88-b9f1-cfb1126424ad-utilities\") pod \"redhat-marketplace-7544f\" (UID: \"438bec48-3499-4e88-b9f1-cfb1126424ad\") " pod="openshift-marketplace/redhat-marketplace-7544f"
Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.888932 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cprs\" (UniqueName: \"kubernetes.io/projected/438bec48-3499-4e88-b9f1-cfb1126424ad-kube-api-access-7cprs\") pod \"redhat-marketplace-7544f\" (UID: \"438bec48-3499-4e88-b9f1-cfb1126424ad\") " pod="openshift-marketplace/redhat-marketplace-7544f"
\"kube-api-access-7cprs\" (UniqueName: \"kubernetes.io/projected/438bec48-3499-4e88-b9f1-cfb1126424ad-kube-api-access-7cprs\") pod \"redhat-marketplace-7544f\" (UID: \"438bec48-3499-4e88-b9f1-cfb1126424ad\") " pod="openshift-marketplace/redhat-marketplace-7544f" Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.888982 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/438bec48-3499-4e88-b9f1-cfb1126424ad-catalog-content\") pod \"redhat-marketplace-7544f\" (UID: \"438bec48-3499-4e88-b9f1-cfb1126424ad\") " pod="openshift-marketplace/redhat-marketplace-7544f" Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.889509 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/438bec48-3499-4e88-b9f1-cfb1126424ad-catalog-content\") pod \"redhat-marketplace-7544f\" (UID: \"438bec48-3499-4e88-b9f1-cfb1126424ad\") " pod="openshift-marketplace/redhat-marketplace-7544f" Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.889541 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/438bec48-3499-4e88-b9f1-cfb1126424ad-utilities\") pod \"redhat-marketplace-7544f\" (UID: \"438bec48-3499-4e88-b9f1-cfb1126424ad\") " pod="openshift-marketplace/redhat-marketplace-7544f" Jan 30 16:59:00 crc kubenswrapper[4875]: I0130 16:59:00.908499 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cprs\" (UniqueName: \"kubernetes.io/projected/438bec48-3499-4e88-b9f1-cfb1126424ad-kube-api-access-7cprs\") pod \"redhat-marketplace-7544f\" (UID: \"438bec48-3499-4e88-b9f1-cfb1126424ad\") " pod="openshift-marketplace/redhat-marketplace-7544f" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.031710 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7544f" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.116549 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fgs4k"] Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.117806 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgs4k" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.132470 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgs4k"] Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.197358 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhp7t\" (UniqueName: \"kubernetes.io/projected/598755be-9785-4050-aa29-1904ae17e4c8-kube-api-access-dhp7t\") pod \"redhat-marketplace-fgs4k\" (UID: \"598755be-9785-4050-aa29-1904ae17e4c8\") " pod="openshift-marketplace/redhat-marketplace-fgs4k" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.197783 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598755be-9785-4050-aa29-1904ae17e4c8-catalog-content\") pod \"redhat-marketplace-fgs4k\" (UID: \"598755be-9785-4050-aa29-1904ae17e4c8\") " pod="openshift-marketplace/redhat-marketplace-fgs4k" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.198031 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598755be-9785-4050-aa29-1904ae17e4c8-utilities\") pod \"redhat-marketplace-fgs4k\" (UID: \"598755be-9785-4050-aa29-1904ae17e4c8\") " pod="openshift-marketplace/redhat-marketplace-fgs4k" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.228607 4875 generic.go:334] "Generic (PLEG): container finished" podID="6891de92-f1af-4dcc-bc97-c2a2a647515b" containerID="0d825bfb3b84827511243fef8ea686dc1c9e948db583f8aa11f1d10cbc20421c" exitCode=0 Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.228673 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwt6q" event={"ID":"6891de92-f1af-4dcc-bc97-c2a2a647515b","Type":"ContainerDied","Data":"0d825bfb3b84827511243fef8ea686dc1c9e948db583f8aa11f1d10cbc20421c"} Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.256860 4875 generic.go:334] "Generic (PLEG): container finished" podID="87c78ecd-3fa5-40a9-ac0d-25449555b524" containerID="c430f92ce05b14e06623324156644261e7802e6049396ae79b78953b3070baa5" exitCode=0 Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.256946 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd4tv" event={"ID":"87c78ecd-3fa5-40a9-ac0d-25449555b524","Type":"ContainerDied","Data":"c430f92ce05b14e06623324156644261e7802e6049396ae79b78953b3070baa5"} Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.264900 4875 generic.go:334] "Generic (PLEG): container finished" podID="e78733b2-73be-4247-825e-b047dbedcdd4" containerID="e5952e962e71e75402ee77708f9101fc0be6efc333b71d954b5e435e03a45325" exitCode=0 Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.264978 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e78733b2-73be-4247-825e-b047dbedcdd4","Type":"ContainerDied","Data":"e5952e962e71e75402ee77708f9101fc0be6efc333b71d954b5e435e03a45325"} Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.272255 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" 
event={"ID":"f681b0b0-d68c-44b4-816e-86756d55542c","Type":"ContainerStarted","Data":"792d48544d7c1edfa8852669485026dce813c7f9eab1af517b44bd593a4b6983"} Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.272302 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" event={"ID":"f681b0b0-d68c-44b4-816e-86756d55542c","Type":"ContainerStarted","Data":"c0c6e0139fc65723bc53a0f18ad9bea6d6cf90a56b6b3727432006100bdfae67"} Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.272908 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.300927 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598755be-9785-4050-aa29-1904ae17e4c8-utilities\") pod \"redhat-marketplace-fgs4k\" (UID: \"598755be-9785-4050-aa29-1904ae17e4c8\") " pod="openshift-marketplace/redhat-marketplace-fgs4k" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.301127 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhp7t\" (UniqueName: \"kubernetes.io/projected/598755be-9785-4050-aa29-1904ae17e4c8-kube-api-access-dhp7t\") pod \"redhat-marketplace-fgs4k\" (UID: \"598755be-9785-4050-aa29-1904ae17e4c8\") " pod="openshift-marketplace/redhat-marketplace-fgs4k" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.301179 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598755be-9785-4050-aa29-1904ae17e4c8-catalog-content\") pod \"redhat-marketplace-fgs4k\" (UID: \"598755be-9785-4050-aa29-1904ae17e4c8\") " pod="openshift-marketplace/redhat-marketplace-fgs4k" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.302560 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598755be-9785-4050-aa29-1904ae17e4c8-utilities\") pod \"redhat-marketplace-fgs4k\" (UID: \"598755be-9785-4050-aa29-1904ae17e4c8\") " pod="openshift-marketplace/redhat-marketplace-fgs4k" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.305388 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598755be-9785-4050-aa29-1904ae17e4c8-catalog-content\") pod \"redhat-marketplace-fgs4k\" (UID: \"598755be-9785-4050-aa29-1904ae17e4c8\") " pod="openshift-marketplace/redhat-marketplace-fgs4k" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.325553 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" podStartSLOduration=131.325514317 podStartE2EDuration="2m11.325514317s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:59:01.322428032 +0000 UTC m=+151.869791415" watchObservedRunningTime="2026-01-30 16:59:01.325514317 +0000 UTC m=+151.872877710" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.338736 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhp7t\" (UniqueName: \"kubernetes.io/projected/598755be-9785-4050-aa29-1904ae17e4c8-kube-api-access-dhp7t\") pod \"redhat-marketplace-fgs4k\" (UID: 
\"598755be-9785-4050-aa29-1904ae17e4c8\") " pod="openshift-marketplace/redhat-marketplace-fgs4k" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.363226 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7544f"] Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.451303 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgs4k" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.716834 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p7g2d"] Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.718182 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7g2d" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.727014 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.739485 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7g2d"] Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.744965 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgs4k"] Jan 30 16:59:01 crc kubenswrapper[4875]: W0130 16:59:01.754685 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod598755be_9785_4050_aa29_1904ae17e4c8.slice/crio-6a54a165cb52079293b0cd605d816f19cd688b205e76311ce68786ff261a297c WatchSource:0}: Error finding container 6a54a165cb52079293b0cd605d816f19cd688b205e76311ce68786ff261a297c: Status 404 returned error can't find the container with id 6a54a165cb52079293b0cd605d816f19cd688b205e76311ce68786ff261a297c Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.814679 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqv9p\" (UniqueName: \"kubernetes.io/projected/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-kube-api-access-mqv9p\") pod \"redhat-operators-p7g2d\" (UID: \"926bc7fe-7fc5-4f59-b161-f32ff75b40b3\") " pod="openshift-marketplace/redhat-operators-p7g2d" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.814734 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-catalog-content\") pod \"redhat-operators-p7g2d\" (UID: \"926bc7fe-7fc5-4f59-b161-f32ff75b40b3\") " pod="openshift-marketplace/redhat-operators-p7g2d" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.814901 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-utilities\") pod \"redhat-operators-p7g2d\" (UID: \"926bc7fe-7fc5-4f59-b161-f32ff75b40b3\") " pod="openshift-marketplace/redhat-operators-p7g2d" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.815132 4875 patch_prober.go:28] interesting pod/router-default-5444994796-5v2bh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:59:01 crc kubenswrapper[4875]: [-]has-synced failed: reason withheld Jan 30 16:59:01 crc kubenswrapper[4875]: 
[+]process-running ok Jan 30 16:59:01 crc kubenswrapper[4875]: healthz check failed Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.815198 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5v2bh" podUID="0d15a27f-97a8-4c8e-8450-5266afa2d382" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.916018 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-utilities\") pod \"redhat-operators-p7g2d\" (UID: \"926bc7fe-7fc5-4f59-b161-f32ff75b40b3\") " pod="openshift-marketplace/redhat-operators-p7g2d" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.916063 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqv9p\" (UniqueName: \"kubernetes.io/projected/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-kube-api-access-mqv9p\") pod \"redhat-operators-p7g2d\" (UID: \"926bc7fe-7fc5-4f59-b161-f32ff75b40b3\") " pod="openshift-marketplace/redhat-operators-p7g2d" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.916085 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-catalog-content\") pod \"redhat-operators-p7g2d\" (UID: \"926bc7fe-7fc5-4f59-b161-f32ff75b40b3\") " pod="openshift-marketplace/redhat-operators-p7g2d" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.916560 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-catalog-content\") pod \"redhat-operators-p7g2d\" (UID: \"926bc7fe-7fc5-4f59-b161-f32ff75b40b3\") " pod="openshift-marketplace/redhat-operators-p7g2d" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.916793 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-utilities\") pod \"redhat-operators-p7g2d\" (UID: \"926bc7fe-7fc5-4f59-b161-f32ff75b40b3\") " pod="openshift-marketplace/redhat-operators-p7g2d" Jan 30 16:59:01 crc kubenswrapper[4875]: I0130 16:59:01.938298 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqv9p\" (UniqueName: \"kubernetes.io/projected/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-kube-api-access-mqv9p\") pod \"redhat-operators-p7g2d\" (UID: \"926bc7fe-7fc5-4f59-b161-f32ff75b40b3\") " pod="openshift-marketplace/redhat-operators-p7g2d" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.058778 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7g2d" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.117796 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j4gqh"] Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.119069 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j4gqh" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.127986 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j4gqh"] Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.221241 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e9dfb9-b895-42da-9d5d-083ffb98fc19-utilities\") pod \"redhat-operators-j4gqh\" (UID: \"67e9dfb9-b895-42da-9d5d-083ffb98fc19\") " pod="openshift-marketplace/redhat-operators-j4gqh" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.221653 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62f4r\" (UniqueName: \"kubernetes.io/projected/67e9dfb9-b895-42da-9d5d-083ffb98fc19-kube-api-access-62f4r\") pod \"redhat-operators-j4gqh\" (UID: \"67e9dfb9-b895-42da-9d5d-083ffb98fc19\") " pod="openshift-marketplace/redhat-operators-j4gqh" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.221709 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e9dfb9-b895-42da-9d5d-083ffb98fc19-catalog-content\") pod \"redhat-operators-j4gqh\" (UID: \"67e9dfb9-b895-42da-9d5d-083ffb98fc19\") " pod="openshift-marketplace/redhat-operators-j4gqh" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.279640 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgs4k" event={"ID":"598755be-9785-4050-aa29-1904ae17e4c8","Type":"ContainerStarted","Data":"6a54a165cb52079293b0cd605d816f19cd688b205e76311ce68786ff261a297c"} Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.281710 4875 generic.go:334] "Generic (PLEG): container finished" podID="438bec48-3499-4e88-b9f1-cfb1126424ad" containerID="a27afb3ca094c5b4b6e24e72b8d0be184622fa17eff5e525c971e7ab09313162" exitCode=0 Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.281769 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7544f" event={"ID":"438bec48-3499-4e88-b9f1-cfb1126424ad","Type":"ContainerDied","Data":"a27afb3ca094c5b4b6e24e72b8d0be184622fa17eff5e525c971e7ab09313162"} Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.282002 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7544f" event={"ID":"438bec48-3499-4e88-b9f1-cfb1126424ad","Type":"ContainerStarted","Data":"da67e34deaafa6984490230b44e258a97d103f663e34c6bff452852edf260e81"} Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.293431 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7g2d"] Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.322875 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e9dfb9-b895-42da-9d5d-083ffb98fc19-utilities\") pod \"redhat-operators-j4gqh\" (UID: \"67e9dfb9-b895-42da-9d5d-083ffb98fc19\") " pod="openshift-marketplace/redhat-operators-j4gqh" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.322937 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62f4r\" (UniqueName: \"kubernetes.io/projected/67e9dfb9-b895-42da-9d5d-083ffb98fc19-kube-api-access-62f4r\") pod \"redhat-operators-j4gqh\" (UID: 
\"67e9dfb9-b895-42da-9d5d-083ffb98fc19\") " pod="openshift-marketplace/redhat-operators-j4gqh" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.323015 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e9dfb9-b895-42da-9d5d-083ffb98fc19-catalog-content\") pod \"redhat-operators-j4gqh\" (UID: \"67e9dfb9-b895-42da-9d5d-083ffb98fc19\") " pod="openshift-marketplace/redhat-operators-j4gqh" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.323459 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e9dfb9-b895-42da-9d5d-083ffb98fc19-utilities\") pod \"redhat-operators-j4gqh\" (UID: \"67e9dfb9-b895-42da-9d5d-083ffb98fc19\") " pod="openshift-marketplace/redhat-operators-j4gqh" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.323463 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e9dfb9-b895-42da-9d5d-083ffb98fc19-catalog-content\") pod \"redhat-operators-j4gqh\" (UID: \"67e9dfb9-b895-42da-9d5d-083ffb98fc19\") " pod="openshift-marketplace/redhat-operators-j4gqh" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.366393 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62f4r\" (UniqueName: \"kubernetes.io/projected/67e9dfb9-b895-42da-9d5d-083ffb98fc19-kube-api-access-62f4r\") pod \"redhat-operators-j4gqh\" (UID: \"67e9dfb9-b895-42da-9d5d-083ffb98fc19\") " pod="openshift-marketplace/redhat-operators-j4gqh" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.458963 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.459020 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.460768 4875 patch_prober.go:28] interesting pod/console-f9d7485db-7s4zv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.460833 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-7s4zv" podUID="37fa5454-ad47-4960-be87-5d9d4e4eab0f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.486488 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j4gqh" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.516598 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.613528 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.629433 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e78733b2-73be-4247-825e-b047dbedcdd4-kube-api-access\") pod \"e78733b2-73be-4247-825e-b047dbedcdd4\" (UID: \"e78733b2-73be-4247-825e-b047dbedcdd4\") " Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.629555 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e78733b2-73be-4247-825e-b047dbedcdd4-kubelet-dir\") pod \"e78733b2-73be-4247-825e-b047dbedcdd4\" (UID: \"e78733b2-73be-4247-825e-b047dbedcdd4\") " Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.629725 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e78733b2-73be-4247-825e-b047dbedcdd4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e78733b2-73be-4247-825e-b047dbedcdd4" (UID: "e78733b2-73be-4247-825e-b047dbedcdd4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.630131 4875 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e78733b2-73be-4247-825e-b047dbedcdd4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.643312 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e78733b2-73be-4247-825e-b047dbedcdd4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e78733b2-73be-4247-825e-b047dbedcdd4" (UID: "e78733b2-73be-4247-825e-b047dbedcdd4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.731210 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e78733b2-73be-4247-825e-b047dbedcdd4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.743261 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.743309 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.753880 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j4gqh"] Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.761969 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.810739 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5v2bh" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.813550 4875 patch_prober.go:28] interesting pod/router-default-5444994796-5v2bh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:59:02 crc kubenswrapper[4875]: [-]has-synced failed: reason withheld Jan 30 16:59:02 crc kubenswrapper[4875]: [+]process-running ok Jan 30 16:59:02 crc kubenswrapper[4875]: healthz check failed Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.813608 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5v2bh" podUID="0d15a27f-97a8-4c8e-8450-5266afa2d382" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.814918 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-stgmg" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.825005 4875 patch_prober.go:28] interesting pod/downloads-7954f5f757-qc97s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.38:8080/\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.825060 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-qc97s" podUID="69e24be4-7935-43ce-9815-ed1fa40e9933" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.38:8080/\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.825081 4875 patch_prober.go:28] interesting pod/downloads-7954f5f757-qc97s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 30 16:59:02 crc kubenswrapper[4875]: I0130 16:59:02.825107 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qc97s" podUID="69e24be4-7935-43ce-9815-ed1fa40e9933" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.38:8080/\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 30 16:59:03 crc kubenswrapper[4875]: I0130 16:59:03.293700 4875 generic.go:334] "Generic (PLEG): container finished" podID="926bc7fe-7fc5-4f59-b161-f32ff75b40b3" containerID="b668a8d22912ceed4e6196452d7fe76d12c771c427ac3f79e290b4a04e1d73d7" exitCode=0 Jan 30 16:59:03 crc kubenswrapper[4875]: I0130 16:59:03.293992 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7g2d" event={"ID":"926bc7fe-7fc5-4f59-b161-f32ff75b40b3","Type":"ContainerDied","Data":"b668a8d22912ceed4e6196452d7fe76d12c771c427ac3f79e290b4a04e1d73d7"} Jan 30 16:59:03 crc kubenswrapper[4875]: I0130 16:59:03.294017 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7g2d" event={"ID":"926bc7fe-7fc5-4f59-b161-f32ff75b40b3","Type":"ContainerStarted","Data":"1ccfa79f68248134b5bc68a99f7892f553f961a1bc328ee718b7b56e45bcb4b7"} Jan 30 16:59:03 crc kubenswrapper[4875]: I0130 16:59:03.302563 4875 generic.go:334] "Generic (PLEG): container finished" podID="67e9dfb9-b895-42da-9d5d-083ffb98fc19" containerID="49ded4fed9990548b8a4b3bb0cb0257946aa2f6ec8c0490251e4d379ba4bf698" exitCode=0 Jan 30 16:59:03 crc kubenswrapper[4875]: I0130 16:59:03.302637 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4gqh" event={"ID":"67e9dfb9-b895-42da-9d5d-083ffb98fc19","Type":"ContainerDied","Data":"49ded4fed9990548b8a4b3bb0cb0257946aa2f6ec8c0490251e4d379ba4bf698"} Jan 30 16:59:03 crc kubenswrapper[4875]: I0130 16:59:03.302661 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4gqh" event={"ID":"67e9dfb9-b895-42da-9d5d-083ffb98fc19","Type":"ContainerStarted","Data":"eec1b745bb867e953874948726d1e6541b4137333383b1b7e6c9015dcfb84adc"} Jan 30 16:59:03 crc kubenswrapper[4875]: I0130 16:59:03.308179 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e78733b2-73be-4247-825e-b047dbedcdd4","Type":"ContainerDied","Data":"b4ca5210aac1981359b3a2c37b5f3217a2d82b43ad1e571f789c415a2252e238"} Jan 30 16:59:03 crc kubenswrapper[4875]: I0130 16:59:03.308276 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4ca5210aac1981359b3a2c37b5f3217a2d82b43ad1e571f789c415a2252e238" Jan 30 16:59:03 crc kubenswrapper[4875]: I0130 16:59:03.308321 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:59:03 crc kubenswrapper[4875]: I0130 16:59:03.325841 4875 generic.go:334] "Generic (PLEG): container finished" podID="598755be-9785-4050-aa29-1904ae17e4c8" containerID="293e51b261531556c351ed9e2f2bc4f68dac7c73c7916e27fd02324d740a0e3b" exitCode=0 Jan 30 16:59:03 crc kubenswrapper[4875]: I0130 16:59:03.326508 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgs4k" event={"ID":"598755be-9785-4050-aa29-1904ae17e4c8","Type":"ContainerDied","Data":"293e51b261531556c351ed9e2f2bc4f68dac7c73c7916e27fd02324d740a0e3b"} Jan 30 16:59:03 crc kubenswrapper[4875]: I0130 16:59:03.337775 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-2d4sj" Jan 30 16:59:03 crc kubenswrapper[4875]: I0130 16:59:03.815764 4875 patch_prober.go:28] interesting pod/router-default-5444994796-5v2bh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:59:03 crc kubenswrapper[4875]: [-]has-synced failed: reason withheld Jan 30 16:59:03 crc kubenswrapper[4875]: [+]process-running ok Jan 30 16:59:03 crc kubenswrapper[4875]: healthz check failed Jan 30 16:59:03 crc kubenswrapper[4875]: I0130 16:59:03.815833 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5v2bh" podUID="0d15a27f-97a8-4c8e-8450-5266afa2d382" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:59:04 crc kubenswrapper[4875]: I0130 16:59:04.814932 4875 patch_prober.go:28] interesting pod/router-default-5444994796-5v2bh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:59:04 crc kubenswrapper[4875]: [-]has-synced failed: reason withheld Jan 30 16:59:04 crc kubenswrapper[4875]: [+]process-running ok Jan 30 16:59:04 crc kubenswrapper[4875]: healthz check failed Jan 30 16:59:04 crc kubenswrapper[4875]: I0130 16:59:04.815198 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5v2bh" podUID="0d15a27f-97a8-4c8e-8450-5266afa2d382" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:59:05 crc kubenswrapper[4875]: I0130 16:59:05.815435 4875 patch_prober.go:28] interesting pod/router-default-5444994796-5v2bh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:59:05 crc kubenswrapper[4875]: [-]has-synced failed: reason withheld Jan 30 16:59:05 crc kubenswrapper[4875]: [+]process-running ok Jan 30 16:59:05 crc kubenswrapper[4875]: healthz check failed Jan 30 16:59:05 crc kubenswrapper[4875]: I0130 16:59:05.815517 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5v2bh" podUID="0d15a27f-97a8-4c8e-8450-5266afa2d382" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:59:06 crc kubenswrapper[4875]: I0130 16:59:06.633299 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 16:59:06 crc kubenswrapper[4875]: 
E0130 16:59:06.640598 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78733b2-73be-4247-825e-b047dbedcdd4" containerName="pruner" Jan 30 16:59:06 crc kubenswrapper[4875]: I0130 16:59:06.640932 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78733b2-73be-4247-825e-b047dbedcdd4" containerName="pruner" Jan 30 16:59:06 crc kubenswrapper[4875]: I0130 16:59:06.641259 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="e78733b2-73be-4247-825e-b047dbedcdd4" containerName="pruner" Jan 30 16:59:06 crc kubenswrapper[4875]: I0130 16:59:06.641880 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 16:59:06 crc kubenswrapper[4875]: I0130 16:59:06.642126 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:59:06 crc kubenswrapper[4875]: I0130 16:59:06.645833 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 16:59:06 crc kubenswrapper[4875]: I0130 16:59:06.647909 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 16:59:06 crc kubenswrapper[4875]: I0130 16:59:06.719910 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77818e70-389b-449b-829d-2fd4f3c49045-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"77818e70-389b-449b-829d-2fd4f3c49045\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:59:06 crc kubenswrapper[4875]: I0130 16:59:06.720080 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77818e70-389b-449b-829d-2fd4f3c49045-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"77818e70-389b-449b-829d-2fd4f3c49045\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:59:06 crc kubenswrapper[4875]: I0130 16:59:06.814286 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5v2bh" Jan 30 16:59:06 crc kubenswrapper[4875]: I0130 16:59:06.819333 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5v2bh" Jan 30 16:59:06 crc kubenswrapper[4875]: I0130 16:59:06.822191 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77818e70-389b-449b-829d-2fd4f3c49045-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"77818e70-389b-449b-829d-2fd4f3c49045\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:59:06 crc kubenswrapper[4875]: I0130 16:59:06.822444 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77818e70-389b-449b-829d-2fd4f3c49045-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"77818e70-389b-449b-829d-2fd4f3c49045\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:59:06 crc kubenswrapper[4875]: I0130 16:59:06.823315 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77818e70-389b-449b-829d-2fd4f3c49045-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: 
\"77818e70-389b-449b-829d-2fd4f3c49045\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:59:06 crc kubenswrapper[4875]: I0130 16:59:06.863940 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77818e70-389b-449b-829d-2fd4f3c49045-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"77818e70-389b-449b-829d-2fd4f3c49045\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:59:06 crc kubenswrapper[4875]: I0130 16:59:06.977222 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:59:07 crc kubenswrapper[4875]: I0130 16:59:07.724103 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mfrmm" Jan 30 16:59:12 crc kubenswrapper[4875]: I0130 16:59:12.489787 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:59:12 crc kubenswrapper[4875]: I0130 16:59:12.495108 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 16:59:12 crc kubenswrapper[4875]: I0130 16:59:12.611666 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs\") pod \"network-metrics-daemon-ptnnq\" (UID: \"64282947-3e36-453a-b460-ada872b157c9\") " pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:59:12 crc kubenswrapper[4875]: I0130 16:59:12.627875 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64282947-3e36-453a-b460-ada872b157c9-metrics-certs\") pod \"network-metrics-daemon-ptnnq\" (UID: \"64282947-3e36-453a-b460-ada872b157c9\") " pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:59:12 crc kubenswrapper[4875]: I0130 16:59:12.841275 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-qc97s" Jan 30 16:59:12 crc kubenswrapper[4875]: I0130 16:59:12.878078 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ptnnq" Jan 30 16:59:17 crc kubenswrapper[4875]: I0130 16:59:17.982036 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 30 16:59:18 crc kubenswrapper[4875]: I0130 16:59:18.041957 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ptnnq"] Jan 30 16:59:18 crc kubenswrapper[4875]: W0130 16:59:18.058450 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64282947_3e36_453a_b460_ada872b157c9.slice/crio-4b1cffcf7ffc1ac1216c27dad222c30a91bbf6f44bc9bc021ac173de0ad4a431 WatchSource:0}: Error finding container 4b1cffcf7ffc1ac1216c27dad222c30a91bbf6f44bc9bc021ac173de0ad4a431: Status 404 returned error can't find the container with id 4b1cffcf7ffc1ac1216c27dad222c30a91bbf6f44bc9bc021ac173de0ad4a431 Jan 30 16:59:18 crc kubenswrapper[4875]: I0130 16:59:18.418513 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"77818e70-389b-449b-829d-2fd4f3c49045","Type":"ContainerStarted","Data":"3e7c06a24db3a65a154683763a374fe7eee7bf9d1b089bbe9ff6563f04477b1a"} Jan 30 16:59:18 crc kubenswrapper[4875]: I0130 16:59:18.419790 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" event={"ID":"64282947-3e36-453a-b460-ada872b157c9","Type":"ContainerStarted","Data":"4b1cffcf7ffc1ac1216c27dad222c30a91bbf6f44bc9bc021ac173de0ad4a431"} Jan 30 16:59:18 crc kubenswrapper[4875]: I0130 16:59:18.881640 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qtgzv"] Jan 30 16:59:18 crc kubenswrapper[4875]: I0130 16:59:18.881943 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" podUID="b48b7a95-33c5-4ba6-a827-1fc5b36d49ec" containerName="controller-manager" containerID="cri-o://4062a8596051612270e4d7f53be7c400b8c427f4690f6ffd505d43171bb545dc" gracePeriod=30 Jan 30 16:59:18 crc kubenswrapper[4875]: I0130 16:59:18.891723 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf"] Jan 30 16:59:18 crc kubenswrapper[4875]: I0130 16:59:18.891972 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" podUID="e1d4e20b-8815-42d1-b8e3-8d0f67d73860" containerName="route-controller-manager" containerID="cri-o://813340ee1ae349b91deab35ede41b17df4ef1d45139276599da9bd490d1cba4b" gracePeriod=30 Jan 30 16:59:19 crc kubenswrapper[4875]: I0130 16:59:19.427345 4875 generic.go:334] "Generic (PLEG): container finished" podID="e1d4e20b-8815-42d1-b8e3-8d0f67d73860" containerID="813340ee1ae349b91deab35ede41b17df4ef1d45139276599da9bd490d1cba4b" exitCode=0 Jan 30 16:59:19 crc kubenswrapper[4875]: I0130 16:59:19.427465 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" event={"ID":"e1d4e20b-8815-42d1-b8e3-8d0f67d73860","Type":"ContainerDied","Data":"813340ee1ae349b91deab35ede41b17df4ef1d45139276599da9bd490d1cba4b"} Jan 30 16:59:19 crc kubenswrapper[4875]: I0130 16:59:19.429168 4875 generic.go:334] "Generic (PLEG): container finished" 
podID="b48b7a95-33c5-4ba6-a827-1fc5b36d49ec" containerID="4062a8596051612270e4d7f53be7c400b8c427f4690f6ffd505d43171bb545dc" exitCode=0 Jan 30 16:59:19 crc kubenswrapper[4875]: I0130 16:59:19.429258 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" event={"ID":"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec","Type":"ContainerDied","Data":"4062a8596051612270e4d7f53be7c400b8c427f4690f6ffd505d43171bb545dc"} Jan 30 16:59:19 crc kubenswrapper[4875]: I0130 16:59:19.430787 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" event={"ID":"64282947-3e36-453a-b460-ada872b157c9","Type":"ContainerStarted","Data":"bd2fc76bf9963969011988a6e0fb1d8de145e3e9611abfe1dd01f8178796f881"} Jan 30 16:59:19 crc kubenswrapper[4875]: I0130 16:59:19.432315 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"77818e70-389b-449b-829d-2fd4f3c49045","Type":"ContainerStarted","Data":"69ae10f76672b38e60f6815981321b0944f456da8932a47e74155da8309f96e3"} Jan 30 16:59:19 crc kubenswrapper[4875]: I0130 16:59:19.453108 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=13.453090227 podStartE2EDuration="13.453090227s" podCreationTimestamp="2026-01-30 16:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:59:19.450892019 +0000 UTC m=+169.998255482" watchObservedRunningTime="2026-01-30 16:59:19.453090227 +0000 UTC m=+170.000453610" Jan 30 16:59:20 crc kubenswrapper[4875]: I0130 16:59:20.224820 4875 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-m6fdf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 16:59:20 crc kubenswrapper[4875]: I0130 16:59:20.224919 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" podUID="e1d4e20b-8815-42d1-b8e3-8d0f67d73860" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 16:59:20 crc kubenswrapper[4875]: I0130 16:59:20.237265 4875 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qtgzv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 30 16:59:20 crc kubenswrapper[4875]: I0130 16:59:20.237348 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" podUID="b48b7a95-33c5-4ba6-a827-1fc5b36d49ec" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 30 16:59:20 crc kubenswrapper[4875]: I0130 16:59:20.296744 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" Jan 30 16:59:26 crc kubenswrapper[4875]: I0130 16:59:26.287520 4875 patch_prober.go:28] interesting 
pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:59:26 crc kubenswrapper[4875]: I0130 16:59:26.287969 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:59:30 crc kubenswrapper[4875]: I0130 16:59:30.224561 4875 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-m6fdf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 16:59:30 crc kubenswrapper[4875]: I0130 16:59:30.225335 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" podUID="e1d4e20b-8815-42d1-b8e3-8d0f67d73860" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 16:59:30 crc kubenswrapper[4875]: I0130 16:59:30.493660 4875 generic.go:334] "Generic (PLEG): container finished" podID="77818e70-389b-449b-829d-2fd4f3c49045" containerID="69ae10f76672b38e60f6815981321b0944f456da8932a47e74155da8309f96e3" exitCode=0 Jan 30 16:59:30 crc kubenswrapper[4875]: I0130 16:59:30.493730 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"77818e70-389b-449b-829d-2fd4f3c49045","Type":"ContainerDied","Data":"69ae10f76672b38e60f6815981321b0944f456da8932a47e74155da8309f96e3"} Jan 30 16:59:31 crc kubenswrapper[4875]: I0130 16:59:31.237620 4875 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-qtgzv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 16:59:31 crc kubenswrapper[4875]: I0130 16:59:31.237693 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" podUID="b48b7a95-33c5-4ba6-a827-1fc5b36d49ec" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 16:59:32 crc kubenswrapper[4875]: I0130 16:59:32.933963 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s24dp" Jan 30 16:59:33 crc kubenswrapper[4875]: E0130 16:59:33.651035 4875 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 16:59:33 crc kubenswrapper[4875]: E0130 16:59:33.651417 4875 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zgpks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vwt6q_openshift-marketplace(6891de92-f1af-4dcc-bc97-c2a2a647515b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:59:33 crc kubenswrapper[4875]: E0130 16:59:33.652805 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vwt6q" podUID="6891de92-f1af-4dcc-bc97-c2a2a647515b" Jan 30 16:59:34 crc kubenswrapper[4875]: E0130 16:59:34.025918 4875 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 16:59:34 crc kubenswrapper[4875]: E0130 16:59:34.026087 4875 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cprs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-7544f_openshift-marketplace(438bec48-3499-4e88-b9f1-cfb1126424ad): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:59:34 crc kubenswrapper[4875]: E0130 16:59:34.027298 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-7544f" podUID="438bec48-3499-4e88-b9f1-cfb1126424ad" Jan 30 16:59:35 crc kubenswrapper[4875]: E0130 16:59:35.347447 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-7544f" podUID="438bec48-3499-4e88-b9f1-cfb1126424ad" Jan 30 16:59:35 crc kubenswrapper[4875]: E0130 16:59:35.347461 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vwt6q" podUID="6891de92-f1af-4dcc-bc97-c2a2a647515b" Jan 30 16:59:35 crc kubenswrapper[4875]: E0130 16:59:35.488634 4875 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 16:59:35 crc kubenswrapper[4875]: E0130 16:59:35.489136 4875 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xhv4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-sd4tv_openshift-marketplace(87c78ecd-3fa5-40a9-ac0d-25449555b524): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:59:35 crc kubenswrapper[4875]: E0130 16:59:35.490327 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-sd4tv" podUID="87c78ecd-3fa5-40a9-ac0d-25449555b524" Jan 30 16:59:35 crc kubenswrapper[4875]: E0130 16:59:35.515217 4875 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 16:59:35 crc kubenswrapper[4875]: E0130 16:59:35.515390 4875 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dhp7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fgs4k_openshift-marketplace(598755be-9785-4050-aa29-1904ae17e4c8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:59:35 crc kubenswrapper[4875]: E0130 16:59:35.516699 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fgs4k" podUID="598755be-9785-4050-aa29-1904ae17e4c8" Jan 30 16:59:38 crc kubenswrapper[4875]: I0130 16:59:38.160484 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:59:38 crc kubenswrapper[4875]: E0130 16:59:38.761443 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-sd4tv" podUID="87c78ecd-3fa5-40a9-ac0d-25449555b524" Jan 30 16:59:38 crc kubenswrapper[4875]: E0130 16:59:38.762519 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fgs4k" podUID="598755be-9785-4050-aa29-1904ae17e4c8" Jan 30 16:59:38 crc kubenswrapper[4875]: E0130 16:59:38.874146 4875 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 16:59:38 crc kubenswrapper[4875]: E0130 16:59:38.874406 4875 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqv9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-p7g2d_openshift-marketplace(926bc7fe-7fc5-4f59-b161-f32ff75b40b3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:59:38 crc kubenswrapper[4875]: E0130 16:59:38.876720 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-p7g2d" podUID="926bc7fe-7fc5-4f59-b161-f32ff75b40b3" Jan 30 16:59:38 crc kubenswrapper[4875]: I0130 16:59:38.921958 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" Jan 30 16:59:38 crc kubenswrapper[4875]: I0130 16:59:38.929440 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:59:38 crc kubenswrapper[4875]: E0130 16:59:38.932016 4875 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 16:59:38 crc kubenswrapper[4875]: E0130 16:59:38.932199 4875 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-smvmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-tz4fm_openshift-marketplace(228882df-4f66-4157-836b-f95a581fe216): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:59:38 crc kubenswrapper[4875]: E0130 16:59:38.933420 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-tz4fm" podUID="228882df-4f66-4157-836b-f95a581fe216" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.025257 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77818e70-389b-449b-829d-2fd4f3c49045-kubelet-dir\") pod \"77818e70-389b-449b-829d-2fd4f3c49045\" (UID: \"77818e70-389b-449b-829d-2fd4f3c49045\") " Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.025310 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-serving-cert\") pod \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.025370 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-2dn8g\" (UniqueName: \"kubernetes.io/projected/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-kube-api-access-2dn8g\") pod \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.025432 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77818e70-389b-449b-829d-2fd4f3c49045-kube-api-access\") pod \"77818e70-389b-449b-829d-2fd4f3c49045\" (UID: \"77818e70-389b-449b-829d-2fd4f3c49045\") " Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.025466 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-proxy-ca-bundles\") pod \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.025470 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77818e70-389b-449b-829d-2fd4f3c49045-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "77818e70-389b-449b-829d-2fd4f3c49045" (UID: "77818e70-389b-449b-829d-2fd4f3c49045"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.025499 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-config\") pod \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.025552 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-client-ca\") pod \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\" (UID: \"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec\") " Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.025887 4875 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77818e70-389b-449b-829d-2fd4f3c49045-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.033028 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-client-ca" (OuterVolumeSpecName: "client-ca") pod "b48b7a95-33c5-4ba6-a827-1fc5b36d49ec" (UID: "b48b7a95-33c5-4ba6-a827-1fc5b36d49ec"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.033128 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b48b7a95-33c5-4ba6-a827-1fc5b36d49ec" (UID: "b48b7a95-33c5-4ba6-a827-1fc5b36d49ec"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.034709 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-config" (OuterVolumeSpecName: "config") pod "b48b7a95-33c5-4ba6-a827-1fc5b36d49ec" (UID: "b48b7a95-33c5-4ba6-a827-1fc5b36d49ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.035265 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-kube-api-access-2dn8g" (OuterVolumeSpecName: "kube-api-access-2dn8g") pod "b48b7a95-33c5-4ba6-a827-1fc5b36d49ec" (UID: "b48b7a95-33c5-4ba6-a827-1fc5b36d49ec"). InnerVolumeSpecName "kube-api-access-2dn8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.042015 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b48b7a95-33c5-4ba6-a827-1fc5b36d49ec" (UID: "b48b7a95-33c5-4ba6-a827-1fc5b36d49ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.042572 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77818e70-389b-449b-829d-2fd4f3c49045-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "77818e70-389b-449b-829d-2fd4f3c49045" (UID: "77818e70-389b-449b-829d-2fd4f3c49045"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:59:39 crc kubenswrapper[4875]: E0130 16:59:39.071926 4875 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 16:59:39 crc kubenswrapper[4875]: E0130 16:59:39.072081 4875 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62f4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-j4gqh_openshift-marketplace(67e9dfb9-b895-42da-9d5d-083ffb98fc19): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled" logger="UnhandledError" Jan 30 16:59:39 crc kubenswrapper[4875]: E0130 16:59:39.073307 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-j4gqh" podUID="67e9dfb9-b895-42da-9d5d-083ffb98fc19" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.127518 4875 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.127541 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.127550 4875 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.127558 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.127569 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dn8g\" (UniqueName: \"kubernetes.io/projected/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec-kube-api-access-2dn8g\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.127594 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77818e70-389b-449b-829d-2fd4f3c49045-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.188216 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.228913 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-config\") pod \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\" (UID: \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\") " Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.228973 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf77x\" (UniqueName: \"kubernetes.io/projected/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-kube-api-access-bf77x\") pod \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\" (UID: \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\") " Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.229017 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-serving-cert\") pod \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\" (UID: \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\") " Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.229056 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-client-ca\") pod \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\" (UID: \"e1d4e20b-8815-42d1-b8e3-8d0f67d73860\") " Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.229949 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-client-ca" (OuterVolumeSpecName: "client-ca") pod "e1d4e20b-8815-42d1-b8e3-8d0f67d73860" (UID: "e1d4e20b-8815-42d1-b8e3-8d0f67d73860"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.229970 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-config" (OuterVolumeSpecName: "config") pod "e1d4e20b-8815-42d1-b8e3-8d0f67d73860" (UID: "e1d4e20b-8815-42d1-b8e3-8d0f67d73860"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.234179 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-kube-api-access-bf77x" (OuterVolumeSpecName: "kube-api-access-bf77x") pod "e1d4e20b-8815-42d1-b8e3-8d0f67d73860" (UID: "e1d4e20b-8815-42d1-b8e3-8d0f67d73860"). InnerVolumeSpecName "kube-api-access-bf77x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.234557 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e1d4e20b-8815-42d1-b8e3-8d0f67d73860" (UID: "e1d4e20b-8815-42d1-b8e3-8d0f67d73860"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.330680 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.330713 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf77x\" (UniqueName: \"kubernetes.io/projected/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-kube-api-access-bf77x\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.330726 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.330737 4875 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1d4e20b-8815-42d1-b8e3-8d0f67d73860-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.537879 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"77818e70-389b-449b-829d-2fd4f3c49045","Type":"ContainerDied","Data":"3e7c06a24db3a65a154683763a374fe7eee7bf9d1b089bbe9ff6563f04477b1a"} Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.537927 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e7c06a24db3a65a154683763a374fe7eee7bf9d1b089bbe9ff6563f04477b1a" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.537985 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.542038 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" event={"ID":"e1d4e20b-8815-42d1-b8e3-8d0f67d73860","Type":"ContainerDied","Data":"d558c57d5d38ea317b6f8fc68ab83b7d7cf4a702d1dc9412c55283deeb99f100"} Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.542082 4875 scope.go:117] "RemoveContainer" containerID="813340ee1ae349b91deab35ede41b17df4ef1d45139276599da9bd490d1cba4b" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.542177 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.545003 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdr7w" event={"ID":"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79","Type":"ContainerStarted","Data":"f74a605721a9fa5216417b1df2dbb6ccaf93a462126d2effb4f9ebcef4f54d29"} Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.554122 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" event={"ID":"b48b7a95-33c5-4ba6-a827-1fc5b36d49ec","Type":"ContainerDied","Data":"90e84a0382fa26d5169143f45818d00fbf5cc99fb600a96d75ae702cd1aea043"} Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.554145 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-qtgzv" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.556195 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ptnnq" event={"ID":"64282947-3e36-453a-b460-ada872b157c9","Type":"ContainerStarted","Data":"10799f0dea2a102487dee13a3c6d3c796d40822fe64b06e2ae952e3cefa59f1e"} Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.562627 4875 scope.go:117] "RemoveContainer" containerID="4062a8596051612270e4d7f53be7c400b8c427f4690f6ffd505d43171bb545dc" Jan 30 16:59:39 crc kubenswrapper[4875]: E0130 16:59:39.563048 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-p7g2d" podUID="926bc7fe-7fc5-4f59-b161-f32ff75b40b3" Jan 30 16:59:39 crc kubenswrapper[4875]: E0130 16:59:39.563297 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-j4gqh" podUID="67e9dfb9-b895-42da-9d5d-083ffb98fc19" Jan 30 16:59:39 crc kubenswrapper[4875]: E0130 16:59:39.564623 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-tz4fm" podUID="228882df-4f66-4157-836b-f95a581fe216" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.629384 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf"] Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.636014 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m6fdf"] Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.646932 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-ptnnq" podStartSLOduration=169.646900143 podStartE2EDuration="2m49.646900143s" podCreationTimestamp="2026-01-30 16:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:59:39.646415888 +0000 UTC m=+190.193779271" watchObservedRunningTime="2026-01-30 16:59:39.646900143 +0000 UTC m=+190.194263526" Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.683724 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qtgzv"] Jan 30 16:59:39 crc kubenswrapper[4875]: I0130 16:59:39.687882 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-qtgzv"] Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.151541 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b48b7a95-33c5-4ba6-a827-1fc5b36d49ec" path="/var/lib/kubelet/pods/b48b7a95-33c5-4ba6-a827-1fc5b36d49ec/volumes" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.152075 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d4e20b-8815-42d1-b8e3-8d0f67d73860" 
path="/var/lib/kubelet/pods/e1d4e20b-8815-42d1-b8e3-8d0f67d73860/volumes" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.409978 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78b8d4c749-cxpcj"] Jan 30 16:59:40 crc kubenswrapper[4875]: E0130 16:59:40.410219 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77818e70-389b-449b-829d-2fd4f3c49045" containerName="pruner" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.410231 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="77818e70-389b-449b-829d-2fd4f3c49045" containerName="pruner" Jan 30 16:59:40 crc kubenswrapper[4875]: E0130 16:59:40.410247 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d4e20b-8815-42d1-b8e3-8d0f67d73860" containerName="route-controller-manager" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.410254 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d4e20b-8815-42d1-b8e3-8d0f67d73860" containerName="route-controller-manager" Jan 30 16:59:40 crc kubenswrapper[4875]: E0130 16:59:40.410267 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48b7a95-33c5-4ba6-a827-1fc5b36d49ec" containerName="controller-manager" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.410273 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48b7a95-33c5-4ba6-a827-1fc5b36d49ec" containerName="controller-manager" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.410377 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="77818e70-389b-449b-829d-2fd4f3c49045" containerName="pruner" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.410389 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1d4e20b-8815-42d1-b8e3-8d0f67d73860" containerName="route-controller-manager" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.410432 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="b48b7a95-33c5-4ba6-a827-1fc5b36d49ec" containerName="controller-manager" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.411107 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.413570 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.414085 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.414720 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.414955 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.415510 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.415629 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb"] Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.416205 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.416401 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.418262 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.418471 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.418870 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb"] Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.422533 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.422972 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.423123 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.423468 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.424133 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.424697 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78b8d4c749-cxpcj"] Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.542405 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-config\") pod \"controller-manager-78b8d4c749-cxpcj\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.542489 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5d68360-5372-4bc8-a494-6370581cefd1-config\") pod \"route-controller-manager-6d57d459b7-d8qrb\" (UID: \"f5d68360-5372-4bc8-a494-6370581cefd1\") " pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.542522 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5d68360-5372-4bc8-a494-6370581cefd1-client-ca\") pod \"route-controller-manager-6d57d459b7-d8qrb\" (UID: \"f5d68360-5372-4bc8-a494-6370581cefd1\") " pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.542752 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-proxy-ca-bundles\") pod \"controller-manager-78b8d4c749-cxpcj\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.542806 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d65bf20-1460-416c-84db-c69ee083a4c6-serving-cert\") pod \"controller-manager-78b8d4c749-cxpcj\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.542864 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-client-ca\") pod \"controller-manager-78b8d4c749-cxpcj\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.542927 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99gzg\" (UniqueName: \"kubernetes.io/projected/f5d68360-5372-4bc8-a494-6370581cefd1-kube-api-access-99gzg\") pod \"route-controller-manager-6d57d459b7-d8qrb\" (UID: \"f5d68360-5372-4bc8-a494-6370581cefd1\") " pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.543004 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csx5c\" (UniqueName: \"kubernetes.io/projected/7d65bf20-1460-416c-84db-c69ee083a4c6-kube-api-access-csx5c\") pod \"controller-manager-78b8d4c749-cxpcj\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.543046 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5d68360-5372-4bc8-a494-6370581cefd1-serving-cert\") pod \"route-controller-manager-6d57d459b7-d8qrb\" (UID: \"f5d68360-5372-4bc8-a494-6370581cefd1\") " pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.562939 4875 generic.go:334] "Generic (PLEG): container finished" podID="0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79" containerID="f74a605721a9fa5216417b1df2dbb6ccaf93a462126d2effb4f9ebcef4f54d29" exitCode=0 Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.563004 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdr7w" event={"ID":"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79","Type":"ContainerDied","Data":"f74a605721a9fa5216417b1df2dbb6ccaf93a462126d2effb4f9ebcef4f54d29"} Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.644212 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5d68360-5372-4bc8-a494-6370581cefd1-config\") pod \"route-controller-manager-6d57d459b7-d8qrb\" (UID: \"f5d68360-5372-4bc8-a494-6370581cefd1\") " pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 
16:59:40.644250 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5d68360-5372-4bc8-a494-6370581cefd1-client-ca\") pod \"route-controller-manager-6d57d459b7-d8qrb\" (UID: \"f5d68360-5372-4bc8-a494-6370581cefd1\") " pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.644275 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-proxy-ca-bundles\") pod \"controller-manager-78b8d4c749-cxpcj\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.645268 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5d68360-5372-4bc8-a494-6370581cefd1-client-ca\") pod \"route-controller-manager-6d57d459b7-d8qrb\" (UID: \"f5d68360-5372-4bc8-a494-6370581cefd1\") " pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.645337 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d65bf20-1460-416c-84db-c69ee083a4c6-serving-cert\") pod \"controller-manager-78b8d4c749-cxpcj\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.645666 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5d68360-5372-4bc8-a494-6370581cefd1-config\") pod \"route-controller-manager-6d57d459b7-d8qrb\" (UID: \"f5d68360-5372-4bc8-a494-6370581cefd1\") " pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.645886 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-proxy-ca-bundles\") pod \"controller-manager-78b8d4c749-cxpcj\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.645998 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-client-ca\") pod \"controller-manager-78b8d4c749-cxpcj\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.646050 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99gzg\" (UniqueName: \"kubernetes.io/projected/f5d68360-5372-4bc8-a494-6370581cefd1-kube-api-access-99gzg\") pod \"route-controller-manager-6d57d459b7-d8qrb\" (UID: \"f5d68360-5372-4bc8-a494-6370581cefd1\") " pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.646077 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csx5c\" (UniqueName: 
\"kubernetes.io/projected/7d65bf20-1460-416c-84db-c69ee083a4c6-kube-api-access-csx5c\") pod \"controller-manager-78b8d4c749-cxpcj\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.646098 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5d68360-5372-4bc8-a494-6370581cefd1-serving-cert\") pod \"route-controller-manager-6d57d459b7-d8qrb\" (UID: \"f5d68360-5372-4bc8-a494-6370581cefd1\") " pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.646122 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-config\") pod \"controller-manager-78b8d4c749-cxpcj\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.647274 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-config\") pod \"controller-manager-78b8d4c749-cxpcj\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.648483 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-client-ca\") pod \"controller-manager-78b8d4c749-cxpcj\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.651536 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d65bf20-1460-416c-84db-c69ee083a4c6-serving-cert\") pod \"controller-manager-78b8d4c749-cxpcj\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.651676 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5d68360-5372-4bc8-a494-6370581cefd1-serving-cert\") pod \"route-controller-manager-6d57d459b7-d8qrb\" (UID: \"f5d68360-5372-4bc8-a494-6370581cefd1\") " pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.663239 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csx5c\" (UniqueName: \"kubernetes.io/projected/7d65bf20-1460-416c-84db-c69ee083a4c6-kube-api-access-csx5c\") pod \"controller-manager-78b8d4c749-cxpcj\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.665081 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99gzg\" (UniqueName: \"kubernetes.io/projected/f5d68360-5372-4bc8-a494-6370581cefd1-kube-api-access-99gzg\") pod \"route-controller-manager-6d57d459b7-d8qrb\" (UID: \"f5d68360-5372-4bc8-a494-6370581cefd1\") " 
pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.746425 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:40 crc kubenswrapper[4875]: I0130 16:59:40.757863 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:41 crc kubenswrapper[4875]: I0130 16:59:41.178845 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb"] Jan 30 16:59:41 crc kubenswrapper[4875]: I0130 16:59:41.184138 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78b8d4c749-cxpcj"] Jan 30 16:59:41 crc kubenswrapper[4875]: W0130 16:59:41.185078 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5d68360_5372_4bc8_a494_6370581cefd1.slice/crio-58693aa1a50b7be5e71498ddc0c43992a47d123548f38afee728917c6456e51e WatchSource:0}: Error finding container 58693aa1a50b7be5e71498ddc0c43992a47d123548f38afee728917c6456e51e: Status 404 returned error can't find the container with id 58693aa1a50b7be5e71498ddc0c43992a47d123548f38afee728917c6456e51e Jan 30 16:59:41 crc kubenswrapper[4875]: I0130 16:59:41.571790 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" event={"ID":"f5d68360-5372-4bc8-a494-6370581cefd1","Type":"ContainerStarted","Data":"c18550abf994f90988712214569049c844af2b158020ecf4fddb1d5fce4f161c"} Jan 30 16:59:41 crc kubenswrapper[4875]: I0130 16:59:41.571850 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" event={"ID":"f5d68360-5372-4bc8-a494-6370581cefd1","Type":"ContainerStarted","Data":"58693aa1a50b7be5e71498ddc0c43992a47d123548f38afee728917c6456e51e"} Jan 30 16:59:41 crc kubenswrapper[4875]: I0130 16:59:41.572089 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:41 crc kubenswrapper[4875]: I0130 16:59:41.574796 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" event={"ID":"7d65bf20-1460-416c-84db-c69ee083a4c6","Type":"ContainerStarted","Data":"3f7dbcd3d6b7dd0c4340622ea9a0971a265828b0055f000473119bbb5aaceb11"} Jan 30 16:59:41 crc kubenswrapper[4875]: I0130 16:59:41.574838 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" event={"ID":"7d65bf20-1460-416c-84db-c69ee083a4c6","Type":"ContainerStarted","Data":"fe5b4d7e33cd8faeae01576e8f59122bc6e7ce9fa6d65801962522601090722a"} Jan 30 16:59:41 crc kubenswrapper[4875]: I0130 16:59:41.574999 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:41 crc kubenswrapper[4875]: I0130 16:59:41.577087 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdr7w" 
event={"ID":"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79","Type":"ContainerStarted","Data":"2d21bd1e721cbfba8e6c1fd3bc4941ebcb69277e211dc78d389d84a935f270c6"} Jan 30 16:59:41 crc kubenswrapper[4875]: I0130 16:59:41.580800 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:41 crc kubenswrapper[4875]: I0130 16:59:41.598516 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" podStartSLOduration=3.598495108 podStartE2EDuration="3.598495108s" podCreationTimestamp="2026-01-30 16:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:59:41.597314992 +0000 UTC m=+192.144678375" watchObservedRunningTime="2026-01-30 16:59:41.598495108 +0000 UTC m=+192.145858501" Jan 30 16:59:41 crc kubenswrapper[4875]: I0130 16:59:41.614523 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pdr7w" podStartSLOduration=2.76686997 podStartE2EDuration="43.614503495s" podCreationTimestamp="2026-01-30 16:58:58 +0000 UTC" firstStartedPulling="2026-01-30 16:59:00.096757227 +0000 UTC m=+150.644120610" lastFinishedPulling="2026-01-30 16:59:40.944390742 +0000 UTC m=+191.491754135" observedRunningTime="2026-01-30 16:59:41.614083591 +0000 UTC m=+192.161446984" watchObservedRunningTime="2026-01-30 16:59:41.614503495 +0000 UTC m=+192.161866878" Jan 30 16:59:41 crc kubenswrapper[4875]: I0130 16:59:41.699397 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" podStartSLOduration=3.699373183 podStartE2EDuration="3.699373183s" podCreationTimestamp="2026-01-30 16:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:59:41.654861194 +0000 UTC m=+192.202224577" watchObservedRunningTime="2026-01-30 16:59:41.699373183 +0000 UTC m=+192.246736566" Jan 30 16:59:41 crc kubenswrapper[4875]: I0130 16:59:41.885612 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:42 crc kubenswrapper[4875]: I0130 16:59:42.222989 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 16:59:42 crc kubenswrapper[4875]: I0130 16:59:42.223944 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:59:42 crc kubenswrapper[4875]: I0130 16:59:42.226472 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 16:59:42 crc kubenswrapper[4875]: I0130 16:59:42.226568 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 16:59:42 crc kubenswrapper[4875]: I0130 16:59:42.235303 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 16:59:42 crc kubenswrapper[4875]: I0130 16:59:42.263415 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9451bc0a-d812-4ab9-b7b5-e9e5f8052141-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9451bc0a-d812-4ab9-b7b5-e9e5f8052141\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:59:42 crc kubenswrapper[4875]: I0130 16:59:42.263683 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9451bc0a-d812-4ab9-b7b5-e9e5f8052141-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9451bc0a-d812-4ab9-b7b5-e9e5f8052141\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:59:42 crc kubenswrapper[4875]: I0130 16:59:42.364400 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9451bc0a-d812-4ab9-b7b5-e9e5f8052141-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9451bc0a-d812-4ab9-b7b5-e9e5f8052141\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:59:42 crc kubenswrapper[4875]: I0130 16:59:42.364437 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9451bc0a-d812-4ab9-b7b5-e9e5f8052141-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9451bc0a-d812-4ab9-b7b5-e9e5f8052141\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:59:42 crc kubenswrapper[4875]: I0130 16:59:42.364527 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9451bc0a-d812-4ab9-b7b5-e9e5f8052141-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9451bc0a-d812-4ab9-b7b5-e9e5f8052141\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:59:42 crc kubenswrapper[4875]: I0130 16:59:42.389931 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9451bc0a-d812-4ab9-b7b5-e9e5f8052141-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9451bc0a-d812-4ab9-b7b5-e9e5f8052141\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:59:42 crc kubenswrapper[4875]: I0130 16:59:42.545752 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:59:42 crc kubenswrapper[4875]: I0130 16:59:42.959678 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 16:59:43 crc kubenswrapper[4875]: I0130 16:59:43.589118 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9451bc0a-d812-4ab9-b7b5-e9e5f8052141","Type":"ContainerStarted","Data":"9de6c5d67365e4d48aec046948cd01bc6b8a9114fd0c2f26e0dfe0c2c3be5264"} Jan 30 16:59:43 crc kubenswrapper[4875]: I0130 16:59:43.589478 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9451bc0a-d812-4ab9-b7b5-e9e5f8052141","Type":"ContainerStarted","Data":"c2fe45a83dc11f94d0e66c3347ff85a61153accbca5849817014d81274e11ec4"} Jan 30 16:59:43 crc kubenswrapper[4875]: I0130 16:59:43.603473 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.603452097 podStartE2EDuration="1.603452097s" podCreationTimestamp="2026-01-30 16:59:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:59:43.602773576 +0000 UTC m=+194.150136959" watchObservedRunningTime="2026-01-30 16:59:43.603452097 +0000 UTC m=+194.150815480" Jan 30 16:59:44 crc kubenswrapper[4875]: I0130 16:59:44.595265 4875 generic.go:334] "Generic (PLEG): container finished" podID="9451bc0a-d812-4ab9-b7b5-e9e5f8052141" containerID="9de6c5d67365e4d48aec046948cd01bc6b8a9114fd0c2f26e0dfe0c2c3be5264" exitCode=0 Jan 30 16:59:44 crc kubenswrapper[4875]: I0130 16:59:44.595340 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9451bc0a-d812-4ab9-b7b5-e9e5f8052141","Type":"ContainerDied","Data":"9de6c5d67365e4d48aec046948cd01bc6b8a9114fd0c2f26e0dfe0c2c3be5264"} Jan 30 16:59:45 crc kubenswrapper[4875]: I0130 16:59:45.862262 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:59:46 crc kubenswrapper[4875]: I0130 16:59:46.014209 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9451bc0a-d812-4ab9-b7b5-e9e5f8052141-kube-api-access\") pod \"9451bc0a-d812-4ab9-b7b5-e9e5f8052141\" (UID: \"9451bc0a-d812-4ab9-b7b5-e9e5f8052141\") " Jan 30 16:59:46 crc kubenswrapper[4875]: I0130 16:59:46.014262 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9451bc0a-d812-4ab9-b7b5-e9e5f8052141-kubelet-dir\") pod \"9451bc0a-d812-4ab9-b7b5-e9e5f8052141\" (UID: \"9451bc0a-d812-4ab9-b7b5-e9e5f8052141\") " Jan 30 16:59:46 crc kubenswrapper[4875]: I0130 16:59:46.014497 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9451bc0a-d812-4ab9-b7b5-e9e5f8052141-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9451bc0a-d812-4ab9-b7b5-e9e5f8052141" (UID: "9451bc0a-d812-4ab9-b7b5-e9e5f8052141"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:59:46 crc kubenswrapper[4875]: I0130 16:59:46.022429 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9451bc0a-d812-4ab9-b7b5-e9e5f8052141-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9451bc0a-d812-4ab9-b7b5-e9e5f8052141" (UID: "9451bc0a-d812-4ab9-b7b5-e9e5f8052141"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:59:46 crc kubenswrapper[4875]: I0130 16:59:46.115877 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9451bc0a-d812-4ab9-b7b5-e9e5f8052141-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:46 crc kubenswrapper[4875]: I0130 16:59:46.115931 4875 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9451bc0a-d812-4ab9-b7b5-e9e5f8052141-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:46 crc kubenswrapper[4875]: I0130 16:59:46.605714 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9451bc0a-d812-4ab9-b7b5-e9e5f8052141","Type":"ContainerDied","Data":"c2fe45a83dc11f94d0e66c3347ff85a61153accbca5849817014d81274e11ec4"} Jan 30 16:59:46 crc kubenswrapper[4875]: I0130 16:59:46.605767 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2fe45a83dc11f94d0e66c3347ff85a61153accbca5849817014d81274e11ec4" Jan 30 16:59:46 crc kubenswrapper[4875]: I0130 16:59:46.605819 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.422064 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 16:59:48 crc kubenswrapper[4875]: E0130 16:59:48.422254 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9451bc0a-d812-4ab9-b7b5-e9e5f8052141" containerName="pruner" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.422265 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="9451bc0a-d812-4ab9-b7b5-e9e5f8052141" containerName="pruner" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.422382 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="9451bc0a-d812-4ab9-b7b5-e9e5f8052141" containerName="pruner" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.422837 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.428078 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.428334 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.428836 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.444172 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d957892e-e8ab-4817-8690-7cb2613af5af-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d957892e-e8ab-4817-8690-7cb2613af5af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.444207 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d957892e-e8ab-4817-8690-7cb2613af5af-kube-api-access\") pod \"installer-9-crc\" (UID: \"d957892e-e8ab-4817-8690-7cb2613af5af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.444229 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d957892e-e8ab-4817-8690-7cb2613af5af-var-lock\") pod \"installer-9-crc\" (UID: \"d957892e-e8ab-4817-8690-7cb2613af5af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.545009 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d957892e-e8ab-4817-8690-7cb2613af5af-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d957892e-e8ab-4817-8690-7cb2613af5af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.545058 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d957892e-e8ab-4817-8690-7cb2613af5af-kube-api-access\") pod \"installer-9-crc\" (UID: \"d957892e-e8ab-4817-8690-7cb2613af5af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.545080 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d957892e-e8ab-4817-8690-7cb2613af5af-var-lock\") pod \"installer-9-crc\" (UID: \"d957892e-e8ab-4817-8690-7cb2613af5af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.545168 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d957892e-e8ab-4817-8690-7cb2613af5af-var-lock\") pod \"installer-9-crc\" (UID: \"d957892e-e8ab-4817-8690-7cb2613af5af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.545216 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d957892e-e8ab-4817-8690-7cb2613af5af-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"d957892e-e8ab-4817-8690-7cb2613af5af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.561494 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d957892e-e8ab-4817-8690-7cb2613af5af-kube-api-access\") pod \"installer-9-crc\" (UID: \"d957892e-e8ab-4817-8690-7cb2613af5af\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.745992 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.885983 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pdr7w" Jan 30 16:59:48 crc kubenswrapper[4875]: I0130 16:59:48.886314 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pdr7w" Jan 30 16:59:49 crc kubenswrapper[4875]: I0130 16:59:49.008981 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pdr7w" Jan 30 16:59:49 crc kubenswrapper[4875]: I0130 16:59:49.136822 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 16:59:49 crc kubenswrapper[4875]: I0130 16:59:49.634829 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwt6q" event={"ID":"6891de92-f1af-4dcc-bc97-c2a2a647515b","Type":"ContainerDied","Data":"b162c4a4993b2b09a2b929a70ed66ebbc075f9d93129ce7907baa74c43709314"} Jan 30 16:59:49 crc kubenswrapper[4875]: I0130 16:59:49.634765 4875 generic.go:334] "Generic (PLEG): container finished" podID="6891de92-f1af-4dcc-bc97-c2a2a647515b" containerID="b162c4a4993b2b09a2b929a70ed66ebbc075f9d93129ce7907baa74c43709314" exitCode=0 Jan 30 16:59:49 crc kubenswrapper[4875]: I0130 16:59:49.637622 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d957892e-e8ab-4817-8690-7cb2613af5af","Type":"ContainerStarted","Data":"5ed87c912071597cb67b0845d1975d6ce62087ae18c5294af7282f924e8412a7"} Jan 30 16:59:49 crc kubenswrapper[4875]: I0130 16:59:49.637665 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d957892e-e8ab-4817-8690-7cb2613af5af","Type":"ContainerStarted","Data":"56ec36140678933d455c65e974dd23a99c72d923d7bccdc8f66618e3139f9f7e"} Jan 30 16:59:49 crc kubenswrapper[4875]: I0130 16:59:49.668265 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.6682475220000001 podStartE2EDuration="1.668247522s" podCreationTimestamp="2026-01-30 16:59:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:59:49.666111919 +0000 UTC m=+200.213475302" watchObservedRunningTime="2026-01-30 16:59:49.668247522 +0000 UTC m=+200.215610905" Jan 30 16:59:49 crc kubenswrapper[4875]: I0130 16:59:49.697286 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pdr7w" Jan 30 16:59:50 crc kubenswrapper[4875]: I0130 16:59:50.644420 4875 generic.go:334] "Generic (PLEG): container finished" podID="438bec48-3499-4e88-b9f1-cfb1126424ad" 
containerID="8d77807776aa532178722af7c5109fbf353c315afb91ee0d04f8e3bbabfd03b4" exitCode=0 Jan 30 16:59:50 crc kubenswrapper[4875]: I0130 16:59:50.644505 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7544f" event={"ID":"438bec48-3499-4e88-b9f1-cfb1126424ad","Type":"ContainerDied","Data":"8d77807776aa532178722af7c5109fbf353c315afb91ee0d04f8e3bbabfd03b4"} Jan 30 16:59:50 crc kubenswrapper[4875]: I0130 16:59:50.647220 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwt6q" event={"ID":"6891de92-f1af-4dcc-bc97-c2a2a647515b","Type":"ContainerStarted","Data":"73f217e83c2f728acd0f6dc0a753dafc32208e19812dd448cb82fc34bcf1d82d"} Jan 30 16:59:50 crc kubenswrapper[4875]: I0130 16:59:50.677111 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vwt6q" podStartSLOduration=2.781708161 podStartE2EDuration="51.677095661s" podCreationTimestamp="2026-01-30 16:58:59 +0000 UTC" firstStartedPulling="2026-01-30 16:59:01.238182932 +0000 UTC m=+151.785546305" lastFinishedPulling="2026-01-30 16:59:50.133570422 +0000 UTC m=+200.680933805" observedRunningTime="2026-01-30 16:59:50.674008955 +0000 UTC m=+201.221372338" watchObservedRunningTime="2026-01-30 16:59:50.677095661 +0000 UTC m=+201.224459044" Jan 30 16:59:51 crc kubenswrapper[4875]: I0130 16:59:51.653596 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7544f" event={"ID":"438bec48-3499-4e88-b9f1-cfb1126424ad","Type":"ContainerStarted","Data":"0f8a00fed494e0a5509d2d10b8c5dc1480faa84d7da571222257f8e452c78291"} Jan 30 16:59:51 crc kubenswrapper[4875]: I0130 16:59:51.655345 4875 generic.go:334] "Generic (PLEG): container finished" podID="228882df-4f66-4157-836b-f95a581fe216" containerID="0d9f5aac4bbbcb2b7f130433f8aa9a7800968093d28beea56dcc5a3068e7b2b8" exitCode=0 Jan 30 16:59:51 crc kubenswrapper[4875]: I0130 16:59:51.655379 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz4fm" event={"ID":"228882df-4f66-4157-836b-f95a581fe216","Type":"ContainerDied","Data":"0d9f5aac4bbbcb2b7f130433f8aa9a7800968093d28beea56dcc5a3068e7b2b8"} Jan 30 16:59:51 crc kubenswrapper[4875]: I0130 16:59:51.677203 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7544f" podStartSLOduration=2.530585108 podStartE2EDuration="51.67718641s" podCreationTimestamp="2026-01-30 16:59:00 +0000 UTC" firstStartedPulling="2026-01-30 16:59:02.285778469 +0000 UTC m=+152.833141852" lastFinishedPulling="2026-01-30 16:59:51.432379771 +0000 UTC m=+201.979743154" observedRunningTime="2026-01-30 16:59:51.672895904 +0000 UTC m=+202.220259297" watchObservedRunningTime="2026-01-30 16:59:51.67718641 +0000 UTC m=+202.224549793" Jan 30 16:59:52 crc kubenswrapper[4875]: I0130 16:59:52.662377 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4gqh" event={"ID":"67e9dfb9-b895-42da-9d5d-083ffb98fc19","Type":"ContainerStarted","Data":"e8892d750bb2eecd5a4354b3f49f93df613be8cd32e2ccd9e13dbc135ff396c9"} Jan 30 16:59:52 crc kubenswrapper[4875]: I0130 16:59:52.664354 4875 generic.go:334] "Generic (PLEG): container finished" podID="87c78ecd-3fa5-40a9-ac0d-25449555b524" containerID="c8d0e91b2a453c24666efee49448deb687eb611b4a37cf4b54a202c171107e91" exitCode=0 Jan 30 16:59:52 crc kubenswrapper[4875]: I0130 16:59:52.664445 4875 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd4tv" event={"ID":"87c78ecd-3fa5-40a9-ac0d-25449555b524","Type":"ContainerDied","Data":"c8d0e91b2a453c24666efee49448deb687eb611b4a37cf4b54a202c171107e91"} Jan 30 16:59:52 crc kubenswrapper[4875]: I0130 16:59:52.667001 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz4fm" event={"ID":"228882df-4f66-4157-836b-f95a581fe216","Type":"ContainerStarted","Data":"696dd21d16745f52db3620914e5571a3535ace0bf7a5c04218ef2409b522877e"} Jan 30 16:59:52 crc kubenswrapper[4875]: I0130 16:59:52.698142 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tz4fm" podStartSLOduration=2.737035465 podStartE2EDuration="54.698125071s" podCreationTimestamp="2026-01-30 16:58:58 +0000 UTC" firstStartedPulling="2026-01-30 16:59:00.096810169 +0000 UTC m=+150.644173552" lastFinishedPulling="2026-01-30 16:59:52.057899775 +0000 UTC m=+202.605263158" observedRunningTime="2026-01-30 16:59:52.697415207 +0000 UTC m=+203.244778590" watchObservedRunningTime="2026-01-30 16:59:52.698125071 +0000 UTC m=+203.245488444" Jan 30 16:59:53 crc kubenswrapper[4875]: I0130 16:59:53.675089 4875 generic.go:334] "Generic (PLEG): container finished" podID="67e9dfb9-b895-42da-9d5d-083ffb98fc19" containerID="e8892d750bb2eecd5a4354b3f49f93df613be8cd32e2ccd9e13dbc135ff396c9" exitCode=0 Jan 30 16:59:53 crc kubenswrapper[4875]: I0130 16:59:53.675187 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4gqh" event={"ID":"67e9dfb9-b895-42da-9d5d-083ffb98fc19","Type":"ContainerDied","Data":"e8892d750bb2eecd5a4354b3f49f93df613be8cd32e2ccd9e13dbc135ff396c9"} Jan 30 16:59:53 crc kubenswrapper[4875]: I0130 16:59:53.679170 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd4tv" event={"ID":"87c78ecd-3fa5-40a9-ac0d-25449555b524","Type":"ContainerStarted","Data":"96678d64fc43b0136ff29bc837c3057c9738314ed8145022426a99eb3afbbc4f"} Jan 30 16:59:53 crc kubenswrapper[4875]: I0130 16:59:53.711738 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sd4tv" podStartSLOduration=3.902757224 podStartE2EDuration="55.711717812s" podCreationTimestamp="2026-01-30 16:58:58 +0000 UTC" firstStartedPulling="2026-01-30 16:59:01.261876136 +0000 UTC m=+151.809239519" lastFinishedPulling="2026-01-30 16:59:53.070836724 +0000 UTC m=+203.618200107" observedRunningTime="2026-01-30 16:59:53.710335224 +0000 UTC m=+204.257698607" watchObservedRunningTime="2026-01-30 16:59:53.711717812 +0000 UTC m=+204.259081195" Jan 30 16:59:54 crc kubenswrapper[4875]: I0130 16:59:54.685030 4875 generic.go:334] "Generic (PLEG): container finished" podID="926bc7fe-7fc5-4f59-b161-f32ff75b40b3" containerID="949ab4631216a9b322aadf65137947c5a7c644b50e4419e1694c09a9ba1cd2be" exitCode=0 Jan 30 16:59:54 crc kubenswrapper[4875]: I0130 16:59:54.685067 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7g2d" event={"ID":"926bc7fe-7fc5-4f59-b161-f32ff75b40b3","Type":"ContainerDied","Data":"949ab4631216a9b322aadf65137947c5a7c644b50e4419e1694c09a9ba1cd2be"} Jan 30 16:59:55 crc kubenswrapper[4875]: I0130 16:59:55.691641 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4gqh" 
event={"ID":"67e9dfb9-b895-42da-9d5d-083ffb98fc19","Type":"ContainerStarted","Data":"bc718a5f6f2af173ce2cc82e12a9781cecb0e86c64c833777657f4ec20ab10fe"} Jan 30 16:59:55 crc kubenswrapper[4875]: I0130 16:59:55.693155 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7g2d" event={"ID":"926bc7fe-7fc5-4f59-b161-f32ff75b40b3","Type":"ContainerStarted","Data":"9ade1006ca5e6d053aefdb283989416b13952322f4a8556e24ff26e640f3d6a5"} Jan 30 16:59:55 crc kubenswrapper[4875]: I0130 16:59:55.711791 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j4gqh" podStartSLOduration=1.684362347 podStartE2EDuration="53.711775877s" podCreationTimestamp="2026-01-30 16:59:02 +0000 UTC" firstStartedPulling="2026-01-30 16:59:03.304671697 +0000 UTC m=+153.852035080" lastFinishedPulling="2026-01-30 16:59:55.332085227 +0000 UTC m=+205.879448610" observedRunningTime="2026-01-30 16:59:55.711479397 +0000 UTC m=+206.258842790" watchObservedRunningTime="2026-01-30 16:59:55.711775877 +0000 UTC m=+206.259139260" Jan 30 16:59:56 crc kubenswrapper[4875]: I0130 16:59:56.287707 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:59:56 crc kubenswrapper[4875]: I0130 16:59:56.288185 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:59:56 crc kubenswrapper[4875]: I0130 16:59:56.288431 4875 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 16:59:56 crc kubenswrapper[4875]: I0130 16:59:56.289400 4875 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69"} pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 16:59:56 crc kubenswrapper[4875]: I0130 16:59:56.289737 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" containerID="cri-o://5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69" gracePeriod=600 Jan 30 16:59:56 crc kubenswrapper[4875]: I0130 16:59:56.699916 4875 generic.go:334] "Generic (PLEG): container finished" podID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerID="5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69" exitCode=0 Jan 30 16:59:56 crc kubenswrapper[4875]: I0130 16:59:56.700004 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerDied","Data":"5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69"} Jan 30 16:59:56 crc kubenswrapper[4875]: I0130 16:59:56.702473 4875 
generic.go:334] "Generic (PLEG): container finished" podID="598755be-9785-4050-aa29-1904ae17e4c8" containerID="07bd1c933e76af6b17634404371a3f008332df6ae61a3ef30588cce4babcd7f9" exitCode=0 Jan 30 16:59:56 crc kubenswrapper[4875]: I0130 16:59:56.702576 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgs4k" event={"ID":"598755be-9785-4050-aa29-1904ae17e4c8","Type":"ContainerDied","Data":"07bd1c933e76af6b17634404371a3f008332df6ae61a3ef30588cce4babcd7f9"} Jan 30 16:59:56 crc kubenswrapper[4875]: I0130 16:59:56.721623 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p7g2d" podStartSLOduration=3.5269950039999998 podStartE2EDuration="55.72160525s" podCreationTimestamp="2026-01-30 16:59:01 +0000 UTC" firstStartedPulling="2026-01-30 16:59:03.296104041 +0000 UTC m=+153.843467424" lastFinishedPulling="2026-01-30 16:59:55.490714287 +0000 UTC m=+206.038077670" observedRunningTime="2026-01-30 16:59:55.732515405 +0000 UTC m=+206.279878778" watchObservedRunningTime="2026-01-30 16:59:56.72160525 +0000 UTC m=+207.268968643" Jan 30 16:59:57 crc kubenswrapper[4875]: I0130 16:59:57.709875 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerStarted","Data":"12371742fd50f0efbcda52c6975077df5a1e419df1f9382a50ead1f6472b0960"} Jan 30 16:59:58 crc kubenswrapper[4875]: I0130 16:59:58.716547 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgs4k" event={"ID":"598755be-9785-4050-aa29-1904ae17e4c8","Type":"ContainerStarted","Data":"9297c117c4a4d70a3229075904efb67489a95d0b63f5a733542d8f387bff6f45"} Jan 30 16:59:58 crc kubenswrapper[4875]: I0130 16:59:58.731683 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fgs4k" podStartSLOduration=3.613175298 podStartE2EDuration="57.731665627s" podCreationTimestamp="2026-01-30 16:59:01 +0000 UTC" firstStartedPulling="2026-01-30 16:59:03.332900352 +0000 UTC m=+153.880263735" lastFinishedPulling="2026-01-30 16:59:57.451390681 +0000 UTC m=+207.998754064" observedRunningTime="2026-01-30 16:59:58.730370813 +0000 UTC m=+209.277734206" watchObservedRunningTime="2026-01-30 16:59:58.731665627 +0000 UTC m=+209.279029010" Jan 30 16:59:58 crc kubenswrapper[4875]: I0130 16:59:58.877068 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78b8d4c749-cxpcj"] Jan 30 16:59:58 crc kubenswrapper[4875]: I0130 16:59:58.877329 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" podUID="7d65bf20-1460-416c-84db-c69ee083a4c6" containerName="controller-manager" containerID="cri-o://3f7dbcd3d6b7dd0c4340622ea9a0971a265828b0055f000473119bbb5aaceb11" gracePeriod=30 Jan 30 16:59:58 crc kubenswrapper[4875]: I0130 16:59:58.902638 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb"] Jan 30 16:59:58 crc kubenswrapper[4875]: I0130 16:59:58.902876 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" podUID="f5d68360-5372-4bc8-a494-6370581cefd1" containerName="route-controller-manager" 
containerID="cri-o://c18550abf994f90988712214569049c844af2b158020ecf4fddb1d5fce4f161c" gracePeriod=30 Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.086969 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sd4tv" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.087512 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sd4tv" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.160879 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sd4tv" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.322857 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tz4fm" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.322948 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tz4fm" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.371531 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tz4fm" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.434699 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.475497 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vwt6q" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.475560 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vwt6q" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.563221 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vwt6q" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.570350 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.620757 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99gzg\" (UniqueName: \"kubernetes.io/projected/f5d68360-5372-4bc8-a494-6370581cefd1-kube-api-access-99gzg\") pod \"f5d68360-5372-4bc8-a494-6370581cefd1\" (UID: \"f5d68360-5372-4bc8-a494-6370581cefd1\") " Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.620843 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5d68360-5372-4bc8-a494-6370581cefd1-config\") pod \"f5d68360-5372-4bc8-a494-6370581cefd1\" (UID: \"f5d68360-5372-4bc8-a494-6370581cefd1\") " Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.621011 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5d68360-5372-4bc8-a494-6370581cefd1-client-ca\") pod \"f5d68360-5372-4bc8-a494-6370581cefd1\" (UID: \"f5d68360-5372-4bc8-a494-6370581cefd1\") " Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.621036 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5d68360-5372-4bc8-a494-6370581cefd1-serving-cert\") pod \"f5d68360-5372-4bc8-a494-6370581cefd1\" (UID: \"f5d68360-5372-4bc8-a494-6370581cefd1\") " Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.621810 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5d68360-5372-4bc8-a494-6370581cefd1-client-ca" (OuterVolumeSpecName: "client-ca") pod "f5d68360-5372-4bc8-a494-6370581cefd1" (UID: "f5d68360-5372-4bc8-a494-6370581cefd1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.621848 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5d68360-5372-4bc8-a494-6370581cefd1-config" (OuterVolumeSpecName: "config") pod "f5d68360-5372-4bc8-a494-6370581cefd1" (UID: "f5d68360-5372-4bc8-a494-6370581cefd1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.628672 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d68360-5372-4bc8-a494-6370581cefd1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f5d68360-5372-4bc8-a494-6370581cefd1" (UID: "f5d68360-5372-4bc8-a494-6370581cefd1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.630354 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5d68360-5372-4bc8-a494-6370581cefd1-kube-api-access-99gzg" (OuterVolumeSpecName: "kube-api-access-99gzg") pod "f5d68360-5372-4bc8-a494-6370581cefd1" (UID: "f5d68360-5372-4bc8-a494-6370581cefd1"). InnerVolumeSpecName "kube-api-access-99gzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.730919 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-client-ca\") pod \"7d65bf20-1460-416c-84db-c69ee083a4c6\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.731052 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-config\") pod \"7d65bf20-1460-416c-84db-c69ee083a4c6\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.731125 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csx5c\" (UniqueName: \"kubernetes.io/projected/7d65bf20-1460-416c-84db-c69ee083a4c6-kube-api-access-csx5c\") pod \"7d65bf20-1460-416c-84db-c69ee083a4c6\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.731210 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-proxy-ca-bundles\") pod \"7d65bf20-1460-416c-84db-c69ee083a4c6\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.731341 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d65bf20-1460-416c-84db-c69ee083a4c6-serving-cert\") pod \"7d65bf20-1460-416c-84db-c69ee083a4c6\" (UID: \"7d65bf20-1460-416c-84db-c69ee083a4c6\") " Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.731947 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99gzg\" (UniqueName: \"kubernetes.io/projected/f5d68360-5372-4bc8-a494-6370581cefd1-kube-api-access-99gzg\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.731968 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5d68360-5372-4bc8-a494-6370581cefd1-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.732003 4875 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5d68360-5372-4bc8-a494-6370581cefd1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.732017 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5d68360-5372-4bc8-a494-6370581cefd1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.735762 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d65bf20-1460-416c-84db-c69ee083a4c6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7d65bf20-1460-416c-84db-c69ee083a4c6" (UID: "7d65bf20-1460-416c-84db-c69ee083a4c6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.736389 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-client-ca" (OuterVolumeSpecName: "client-ca") pod "7d65bf20-1460-416c-84db-c69ee083a4c6" (UID: "7d65bf20-1460-416c-84db-c69ee083a4c6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.737645 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-config" (OuterVolumeSpecName: "config") pod "7d65bf20-1460-416c-84db-c69ee083a4c6" (UID: "7d65bf20-1460-416c-84db-c69ee083a4c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.738052 4875 generic.go:334] "Generic (PLEG): container finished" podID="f5d68360-5372-4bc8-a494-6370581cefd1" containerID="c18550abf994f90988712214569049c844af2b158020ecf4fddb1d5fce4f161c" exitCode=0 Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.738081 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" event={"ID":"f5d68360-5372-4bc8-a494-6370581cefd1","Type":"ContainerDied","Data":"c18550abf994f90988712214569049c844af2b158020ecf4fddb1d5fce4f161c"} Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.738166 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" event={"ID":"f5d68360-5372-4bc8-a494-6370581cefd1","Type":"ContainerDied","Data":"58693aa1a50b7be5e71498ddc0c43992a47d123548f38afee728917c6456e51e"} Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.738192 4875 scope.go:117] "RemoveContainer" containerID="c18550abf994f90988712214569049c844af2b158020ecf4fddb1d5fce4f161c" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.738194 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.740981 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" event={"ID":"7d65bf20-1460-416c-84db-c69ee083a4c6","Type":"ContainerDied","Data":"3f7dbcd3d6b7dd0c4340622ea9a0971a265828b0055f000473119bbb5aaceb11"} Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.741353 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.743918 4875 generic.go:334] "Generic (PLEG): container finished" podID="7d65bf20-1460-416c-84db-c69ee083a4c6" containerID="3f7dbcd3d6b7dd0c4340622ea9a0971a265828b0055f000473119bbb5aaceb11" exitCode=0 Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.744290 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78b8d4c749-cxpcj" event={"ID":"7d65bf20-1460-416c-84db-c69ee083a4c6","Type":"ContainerDied","Data":"fe5b4d7e33cd8faeae01576e8f59122bc6e7ce9fa6d65801962522601090722a"} Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.746719 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d65bf20-1460-416c-84db-c69ee083a4c6-kube-api-access-csx5c" (OuterVolumeSpecName: "kube-api-access-csx5c") pod "7d65bf20-1460-416c-84db-c69ee083a4c6" (UID: "7d65bf20-1460-416c-84db-c69ee083a4c6"). InnerVolumeSpecName "kube-api-access-csx5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.747033 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7d65bf20-1460-416c-84db-c69ee083a4c6" (UID: "7d65bf20-1460-416c-84db-c69ee083a4c6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.762343 4875 scope.go:117] "RemoveContainer" containerID="c18550abf994f90988712214569049c844af2b158020ecf4fddb1d5fce4f161c" Jan 30 16:59:59 crc kubenswrapper[4875]: E0130 16:59:59.763336 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c18550abf994f90988712214569049c844af2b158020ecf4fddb1d5fce4f161c\": container with ID starting with c18550abf994f90988712214569049c844af2b158020ecf4fddb1d5fce4f161c not found: ID does not exist" containerID="c18550abf994f90988712214569049c844af2b158020ecf4fddb1d5fce4f161c" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.763384 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c18550abf994f90988712214569049c844af2b158020ecf4fddb1d5fce4f161c"} err="failed to get container status \"c18550abf994f90988712214569049c844af2b158020ecf4fddb1d5fce4f161c\": rpc error: code = NotFound desc = could not find container \"c18550abf994f90988712214569049c844af2b158020ecf4fddb1d5fce4f161c\": container with ID starting with c18550abf994f90988712214569049c844af2b158020ecf4fddb1d5fce4f161c not found: ID does not exist" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.763415 4875 scope.go:117] "RemoveContainer" containerID="3f7dbcd3d6b7dd0c4340622ea9a0971a265828b0055f000473119bbb5aaceb11" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.784922 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb"] Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.789386 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d57d459b7-d8qrb"] Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.795194 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-sd4tv" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.795276 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vwt6q" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.799099 4875 scope.go:117] "RemoveContainer" containerID="3f7dbcd3d6b7dd0c4340622ea9a0971a265828b0055f000473119bbb5aaceb11" Jan 30 16:59:59 crc kubenswrapper[4875]: E0130 16:59:59.799893 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f7dbcd3d6b7dd0c4340622ea9a0971a265828b0055f000473119bbb5aaceb11\": container with ID starting with 3f7dbcd3d6b7dd0c4340622ea9a0971a265828b0055f000473119bbb5aaceb11 not found: ID does not exist" containerID="3f7dbcd3d6b7dd0c4340622ea9a0971a265828b0055f000473119bbb5aaceb11" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.799960 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f7dbcd3d6b7dd0c4340622ea9a0971a265828b0055f000473119bbb5aaceb11"} err="failed to get container status \"3f7dbcd3d6b7dd0c4340622ea9a0971a265828b0055f000473119bbb5aaceb11\": rpc error: code = NotFound desc = could not find container \"3f7dbcd3d6b7dd0c4340622ea9a0971a265828b0055f000473119bbb5aaceb11\": container with ID starting with 3f7dbcd3d6b7dd0c4340622ea9a0971a265828b0055f000473119bbb5aaceb11 not found: ID does not exist" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.811010 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tz4fm" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.832522 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d65bf20-1460-416c-84db-c69ee083a4c6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.832566 4875 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.832579 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.832610 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csx5c\" (UniqueName: \"kubernetes.io/projected/7d65bf20-1460-416c-84db-c69ee083a4c6-kube-api-access-csx5c\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:59 crc kubenswrapper[4875]: I0130 16:59:59.832621 4875 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d65bf20-1460-416c-84db-c69ee083a4c6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.068277 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78b8d4c749-cxpcj"] Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.069749 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-78b8d4c749-cxpcj"] Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.142771 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7d65bf20-1460-416c-84db-c69ee083a4c6" path="/var/lib/kubelet/pods/7d65bf20-1460-416c-84db-c69ee083a4c6/volumes" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.143391 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5d68360-5372-4bc8-a494-6370581cefd1" path="/var/lib/kubelet/pods/f5d68360-5372-4bc8-a494-6370581cefd1/volumes" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.143860 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr"] Jan 30 17:00:00 crc kubenswrapper[4875]: E0130 17:00:00.144142 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d68360-5372-4bc8-a494-6370581cefd1" containerName="route-controller-manager" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.144162 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d68360-5372-4bc8-a494-6370581cefd1" containerName="route-controller-manager" Jan 30 17:00:00 crc kubenswrapper[4875]: E0130 17:00:00.144176 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d65bf20-1460-416c-84db-c69ee083a4c6" containerName="controller-manager" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.144186 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d65bf20-1460-416c-84db-c69ee083a4c6" containerName="controller-manager" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.144301 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5d68360-5372-4bc8-a494-6370581cefd1" containerName="route-controller-manager" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.144309 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d65bf20-1460-416c-84db-c69ee083a4c6" containerName="controller-manager" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.144889 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.146918 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.152036 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr"] Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.152674 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.337565 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83d856a2-4b52-431d-9ef1-d06ce610b7c1-config-volume\") pod \"collect-profiles-29496540-9kthr\" (UID: \"83d856a2-4b52-431d-9ef1-d06ce610b7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.337630 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-247z4\" (UniqueName: \"kubernetes.io/projected/83d856a2-4b52-431d-9ef1-d06ce610b7c1-kube-api-access-247z4\") pod \"collect-profiles-29496540-9kthr\" (UID: \"83d856a2-4b52-431d-9ef1-d06ce610b7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.337813 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83d856a2-4b52-431d-9ef1-d06ce610b7c1-secret-volume\") pod \"collect-profiles-29496540-9kthr\" (UID: \"83d856a2-4b52-431d-9ef1-d06ce610b7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.421229 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4"] Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.422280 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.424550 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.424879 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.425028 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.425780 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.426211 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.426449 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7597544687-scv9p"] Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.426739 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.427956 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.430062 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.430297 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.430302 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.430999 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.431604 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.431739 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.433175 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7597544687-scv9p"] Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.438489 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4"] Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.439394 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f74a1a2-7858-49c3-bb89-2c209bfefb32-client-ca\") pod \"route-controller-manager-64586c844b-jvmq4\" (UID: \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\") " 
pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.439449 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83d856a2-4b52-431d-9ef1-d06ce610b7c1-config-volume\") pod \"collect-profiles-29496540-9kthr\" (UID: \"83d856a2-4b52-431d-9ef1-d06ce610b7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.439472 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-247z4\" (UniqueName: \"kubernetes.io/projected/83d856a2-4b52-431d-9ef1-d06ce610b7c1-kube-api-access-247z4\") pod \"collect-profiles-29496540-9kthr\" (UID: \"83d856a2-4b52-431d-9ef1-d06ce610b7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.439491 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-proxy-ca-bundles\") pod \"controller-manager-7597544687-scv9p\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.439509 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5np2p\" (UniqueName: \"kubernetes.io/projected/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-kube-api-access-5np2p\") pod \"controller-manager-7597544687-scv9p\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.439528 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-serving-cert\") pod \"controller-manager-7597544687-scv9p\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.439555 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f74a1a2-7858-49c3-bb89-2c209bfefb32-config\") pod \"route-controller-manager-64586c844b-jvmq4\" (UID: \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\") " pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.439597 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8zm\" (UniqueName: \"kubernetes.io/projected/5f74a1a2-7858-49c3-bb89-2c209bfefb32-kube-api-access-tb8zm\") pod \"route-controller-manager-64586c844b-jvmq4\" (UID: \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\") " pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.439615 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-config\") pod \"controller-manager-7597544687-scv9p\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " 
pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.439636 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-client-ca\") pod \"controller-manager-7597544687-scv9p\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.439670 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f74a1a2-7858-49c3-bb89-2c209bfefb32-serving-cert\") pod \"route-controller-manager-64586c844b-jvmq4\" (UID: \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\") " pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.439706 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83d856a2-4b52-431d-9ef1-d06ce610b7c1-secret-volume\") pod \"collect-profiles-29496540-9kthr\" (UID: \"83d856a2-4b52-431d-9ef1-d06ce610b7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.440350 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.441064 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83d856a2-4b52-431d-9ef1-d06ce610b7c1-config-volume\") pod \"collect-profiles-29496540-9kthr\" (UID: \"83d856a2-4b52-431d-9ef1-d06ce610b7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.446515 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83d856a2-4b52-431d-9ef1-d06ce610b7c1-secret-volume\") pod \"collect-profiles-29496540-9kthr\" (UID: \"83d856a2-4b52-431d-9ef1-d06ce610b7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.464220 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-247z4\" (UniqueName: \"kubernetes.io/projected/83d856a2-4b52-431d-9ef1-d06ce610b7c1-kube-api-access-247z4\") pod \"collect-profiles-29496540-9kthr\" (UID: \"83d856a2-4b52-431d-9ef1-d06ce610b7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.541200 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f74a1a2-7858-49c3-bb89-2c209bfefb32-client-ca\") pod \"route-controller-manager-64586c844b-jvmq4\" (UID: \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\") " pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.541269 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-proxy-ca-bundles\") pod \"controller-manager-7597544687-scv9p\" (UID: 
\"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.541289 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5np2p\" (UniqueName: \"kubernetes.io/projected/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-kube-api-access-5np2p\") pod \"controller-manager-7597544687-scv9p\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.541310 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-serving-cert\") pod \"controller-manager-7597544687-scv9p\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.541334 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f74a1a2-7858-49c3-bb89-2c209bfefb32-config\") pod \"route-controller-manager-64586c844b-jvmq4\" (UID: \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\") " pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.541359 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb8zm\" (UniqueName: \"kubernetes.io/projected/5f74a1a2-7858-49c3-bb89-2c209bfefb32-kube-api-access-tb8zm\") pod \"route-controller-manager-64586c844b-jvmq4\" (UID: \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\") " pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.541377 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-config\") pod \"controller-manager-7597544687-scv9p\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.541395 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-client-ca\") pod \"controller-manager-7597544687-scv9p\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.541423 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f74a1a2-7858-49c3-bb89-2c209bfefb32-serving-cert\") pod \"route-controller-manager-64586c844b-jvmq4\" (UID: \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\") " pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.543104 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f74a1a2-7858-49c3-bb89-2c209bfefb32-client-ca\") pod \"route-controller-manager-64586c844b-jvmq4\" (UID: \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\") " pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:00 crc 
kubenswrapper[4875]: I0130 17:00:00.543230 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f74a1a2-7858-49c3-bb89-2c209bfefb32-config\") pod \"route-controller-manager-64586c844b-jvmq4\" (UID: \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\") " pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.543500 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-proxy-ca-bundles\") pod \"controller-manager-7597544687-scv9p\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.544488 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-serving-cert\") pod \"controller-manager-7597544687-scv9p\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.544914 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-client-ca\") pod \"controller-manager-7597544687-scv9p\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.547118 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f74a1a2-7858-49c3-bb89-2c209bfefb32-serving-cert\") pod \"route-controller-manager-64586c844b-jvmq4\" (UID: \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\") " pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.558849 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-config\") pod \"controller-manager-7597544687-scv9p\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.562541 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb8zm\" (UniqueName: \"kubernetes.io/projected/5f74a1a2-7858-49c3-bb89-2c209bfefb32-kube-api-access-tb8zm\") pod \"route-controller-manager-64586c844b-jvmq4\" (UID: \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\") " pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.563887 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5np2p\" (UniqueName: \"kubernetes.io/projected/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-kube-api-access-5np2p\") pod \"controller-manager-7597544687-scv9p\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.740669 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.763441 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.800507 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:00 crc kubenswrapper[4875]: I0130 17:00:00.988138 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4"] Jan 30 17:00:01 crc kubenswrapper[4875]: I0130 17:00:01.032138 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7544f" Jan 30 17:00:01 crc kubenswrapper[4875]: I0130 17:00:01.032468 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7544f" Jan 30 17:00:01 crc kubenswrapper[4875]: I0130 17:00:01.101680 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7544f" Jan 30 17:00:01 crc kubenswrapper[4875]: I0130 17:00:01.170160 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tz4fm"] Jan 30 17:00:01 crc kubenswrapper[4875]: I0130 17:00:01.298630 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr"] Jan 30 17:00:01 crc kubenswrapper[4875]: I0130 17:00:01.323990 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7597544687-scv9p"] Jan 30 17:00:01 crc kubenswrapper[4875]: W0130 17:00:01.329182 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5eebfb0_ab5b_40dc_9927_38e711b5eddf.slice/crio-a142d3b79d4996e7d6806548a2c438a49f231f812ba795800800e2ebb5a3b2a2 WatchSource:0}: Error finding container a142d3b79d4996e7d6806548a2c438a49f231f812ba795800800e2ebb5a3b2a2: Status 404 returned error can't find the container with id a142d3b79d4996e7d6806548a2c438a49f231f812ba795800800e2ebb5a3b2a2 Jan 30 17:00:01 crc kubenswrapper[4875]: I0130 17:00:01.453986 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fgs4k" Jan 30 17:00:01 crc kubenswrapper[4875]: I0130 17:00:01.456829 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fgs4k" Jan 30 17:00:01 crc kubenswrapper[4875]: I0130 17:00:01.500033 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fgs4k" Jan 30 17:00:01 crc kubenswrapper[4875]: I0130 17:00:01.763379 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vwt6q"] Jan 30 17:00:01 crc kubenswrapper[4875]: I0130 17:00:01.765391 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr" event={"ID":"83d856a2-4b52-431d-9ef1-d06ce610b7c1","Type":"ContainerStarted","Data":"0ce51d4ad61560233035e2b44777406a5637f9e467965bece154d8a991c02879"} Jan 30 17:00:01 crc kubenswrapper[4875]: I0130 17:00:01.766740 4875 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" event={"ID":"5f74a1a2-7858-49c3-bb89-2c209bfefb32","Type":"ContainerStarted","Data":"fd62dc24b0f626b8897e9a6e1d10b9c8d1894d678517f72a0369cb1c09538866"} Jan 30 17:00:01 crc kubenswrapper[4875]: I0130 17:00:01.767530 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7597544687-scv9p" event={"ID":"e5eebfb0-ab5b-40dc-9927-38e711b5eddf","Type":"ContainerStarted","Data":"a142d3b79d4996e7d6806548a2c438a49f231f812ba795800800e2ebb5a3b2a2"} Jan 30 17:00:01 crc kubenswrapper[4875]: I0130 17:00:01.768117 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tz4fm" podUID="228882df-4f66-4157-836b-f95a581fe216" containerName="registry-server" containerID="cri-o://696dd21d16745f52db3620914e5571a3535ace0bf7a5c04218ef2409b522877e" gracePeriod=2 Jan 30 17:00:01 crc kubenswrapper[4875]: I0130 17:00:01.769298 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vwt6q" podUID="6891de92-f1af-4dcc-bc97-c2a2a647515b" containerName="registry-server" containerID="cri-o://73f217e83c2f728acd0f6dc0a753dafc32208e19812dd448cb82fc34bcf1d82d" gracePeriod=2 Jan 30 17:00:01 crc kubenswrapper[4875]: I0130 17:00:01.809462 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7544f" Jan 30 17:00:01 crc kubenswrapper[4875]: E0130 17:00:01.974132 4875 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod228882df_4f66_4157_836b_f95a581fe216.slice/crio-696dd21d16745f52db3620914e5571a3535ace0bf7a5c04218ef2409b522877e.scope\": RecentStats: unable to find data in memory cache]" Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.059445 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p7g2d" Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.059495 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p7g2d" Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.103122 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p7g2d" Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.486765 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j4gqh" Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.486813 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j4gqh" Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.524648 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j4gqh" Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.776233 4875 generic.go:334] "Generic (PLEG): container finished" podID="228882df-4f66-4157-836b-f95a581fe216" containerID="696dd21d16745f52db3620914e5571a3535ace0bf7a5c04218ef2409b522877e" exitCode=0 Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.776292 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz4fm" 
event={"ID":"228882df-4f66-4157-836b-f95a581fe216","Type":"ContainerDied","Data":"696dd21d16745f52db3620914e5571a3535ace0bf7a5c04218ef2409b522877e"} Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.778342 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" event={"ID":"5f74a1a2-7858-49c3-bb89-2c209bfefb32","Type":"ContainerStarted","Data":"8f95fa8c387a04038bafb5baf3859d7c88e4d7f4a3a57caa7268e7d70164d143"} Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.778621 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.779988 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7597544687-scv9p" event={"ID":"e5eebfb0-ab5b-40dc-9927-38e711b5eddf","Type":"ContainerStarted","Data":"fc856d850074943e5fc424998ef95a978fce3daa43ec1141c314bb18a5446731"} Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.780060 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.782763 4875 generic.go:334] "Generic (PLEG): container finished" podID="6891de92-f1af-4dcc-bc97-c2a2a647515b" containerID="73f217e83c2f728acd0f6dc0a753dafc32208e19812dd448cb82fc34bcf1d82d" exitCode=0 Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.782815 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwt6q" event={"ID":"6891de92-f1af-4dcc-bc97-c2a2a647515b","Type":"ContainerDied","Data":"73f217e83c2f728acd0f6dc0a753dafc32208e19812dd448cb82fc34bcf1d82d"} Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.784892 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.786700 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.787614 4875 generic.go:334] "Generic (PLEG): container finished" podID="83d856a2-4b52-431d-9ef1-d06ce610b7c1" containerID="1e40f8355bb70d58ef4b0b3a8af0873dcd39fe87e323e21f17d469e2ddc1392d" exitCode=0 Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.787710 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr" event={"ID":"83d856a2-4b52-431d-9ef1-d06ce610b7c1","Type":"ContainerDied","Data":"1e40f8355bb70d58ef4b0b3a8af0873dcd39fe87e323e21f17d469e2ddc1392d"} Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.848847 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" podStartSLOduration=4.84881509 podStartE2EDuration="4.84881509s" podCreationTimestamp="2026-01-30 16:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:00:02.80601211 +0000 UTC m=+213.353375493" watchObservedRunningTime="2026-01-30 17:00:02.84881509 +0000 UTC m=+213.396178523" Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.858361 4875 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7597544687-scv9p" podStartSLOduration=4.858340655 podStartE2EDuration="4.858340655s" podCreationTimestamp="2026-01-30 16:59:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:00:02.824491561 +0000 UTC m=+213.371854984" watchObservedRunningTime="2026-01-30 17:00:02.858340655 +0000 UTC m=+213.405704048" Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.864193 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j4gqh" Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.868104 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fgs4k" Jan 30 17:00:02 crc kubenswrapper[4875]: I0130 17:00:02.901754 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p7g2d" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.092232 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tz4fm" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.220107 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vwt6q" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.282203 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smvmm\" (UniqueName: \"kubernetes.io/projected/228882df-4f66-4157-836b-f95a581fe216-kube-api-access-smvmm\") pod \"228882df-4f66-4157-836b-f95a581fe216\" (UID: \"228882df-4f66-4157-836b-f95a581fe216\") " Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.282296 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228882df-4f66-4157-836b-f95a581fe216-catalog-content\") pod \"228882df-4f66-4157-836b-f95a581fe216\" (UID: \"228882df-4f66-4157-836b-f95a581fe216\") " Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.282334 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228882df-4f66-4157-836b-f95a581fe216-utilities\") pod \"228882df-4f66-4157-836b-f95a581fe216\" (UID: \"228882df-4f66-4157-836b-f95a581fe216\") " Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.282448 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgpks\" (UniqueName: \"kubernetes.io/projected/6891de92-f1af-4dcc-bc97-c2a2a647515b-kube-api-access-zgpks\") pod \"6891de92-f1af-4dcc-bc97-c2a2a647515b\" (UID: \"6891de92-f1af-4dcc-bc97-c2a2a647515b\") " Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.282488 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6891de92-f1af-4dcc-bc97-c2a2a647515b-catalog-content\") pod \"6891de92-f1af-4dcc-bc97-c2a2a647515b\" (UID: \"6891de92-f1af-4dcc-bc97-c2a2a647515b\") " Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.283462 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/228882df-4f66-4157-836b-f95a581fe216-utilities" (OuterVolumeSpecName: "utilities") pod 
"228882df-4f66-4157-836b-f95a581fe216" (UID: "228882df-4f66-4157-836b-f95a581fe216"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.286753 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/228882df-4f66-4157-836b-f95a581fe216-kube-api-access-smvmm" (OuterVolumeSpecName: "kube-api-access-smvmm") pod "228882df-4f66-4157-836b-f95a581fe216" (UID: "228882df-4f66-4157-836b-f95a581fe216"). InnerVolumeSpecName "kube-api-access-smvmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.286895 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6891de92-f1af-4dcc-bc97-c2a2a647515b-kube-api-access-zgpks" (OuterVolumeSpecName: "kube-api-access-zgpks") pod "6891de92-f1af-4dcc-bc97-c2a2a647515b" (UID: "6891de92-f1af-4dcc-bc97-c2a2a647515b"). InnerVolumeSpecName "kube-api-access-zgpks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.335829 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/228882df-4f66-4157-836b-f95a581fe216-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "228882df-4f66-4157-836b-f95a581fe216" (UID: "228882df-4f66-4157-836b-f95a581fe216"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.345822 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6891de92-f1af-4dcc-bc97-c2a2a647515b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6891de92-f1af-4dcc-bc97-c2a2a647515b" (UID: "6891de92-f1af-4dcc-bc97-c2a2a647515b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.383926 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6891de92-f1af-4dcc-bc97-c2a2a647515b-utilities\") pod \"6891de92-f1af-4dcc-bc97-c2a2a647515b\" (UID: \"6891de92-f1af-4dcc-bc97-c2a2a647515b\") " Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.384458 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smvmm\" (UniqueName: \"kubernetes.io/projected/228882df-4f66-4157-836b-f95a581fe216-kube-api-access-smvmm\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.384480 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/228882df-4f66-4157-836b-f95a581fe216-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.384518 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/228882df-4f66-4157-836b-f95a581fe216-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.384534 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgpks\" (UniqueName: \"kubernetes.io/projected/6891de92-f1af-4dcc-bc97-c2a2a647515b-kube-api-access-zgpks\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.384546 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6891de92-f1af-4dcc-bc97-c2a2a647515b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.384778 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6891de92-f1af-4dcc-bc97-c2a2a647515b-utilities" (OuterVolumeSpecName: "utilities") pod "6891de92-f1af-4dcc-bc97-c2a2a647515b" (UID: "6891de92-f1af-4dcc-bc97-c2a2a647515b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.485829 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6891de92-f1af-4dcc-bc97-c2a2a647515b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.566475 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgs4k"] Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.795544 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tz4fm" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.795542 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz4fm" event={"ID":"228882df-4f66-4157-836b-f95a581fe216","Type":"ContainerDied","Data":"d514d6c9d251927e1cff073d6d5ff1a72f7f337505999f33b0f744ca86235ca1"} Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.795654 4875 scope.go:117] "RemoveContainer" containerID="696dd21d16745f52db3620914e5571a3535ace0bf7a5c04218ef2409b522877e" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.798010 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwt6q" event={"ID":"6891de92-f1af-4dcc-bc97-c2a2a647515b","Type":"ContainerDied","Data":"e83f7b3c0f3a4610e4bb8da5c8d533c0e0e21a1fb0eaee1f68ae2dcd08c41c06"} Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.798029 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vwt6q" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.816805 4875 scope.go:117] "RemoveContainer" containerID="0d9f5aac4bbbcb2b7f130433f8aa9a7800968093d28beea56dcc5a3068e7b2b8" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.836269 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tz4fm"] Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.845196 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tz4fm"] Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.850446 4875 scope.go:117] "RemoveContainer" containerID="09683f15df5d56b44e58c50fbe203960c0e8c33021dec1b7ba00aa111b8bfd70" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.852216 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vwt6q"] Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.857858 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vwt6q"] Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.865763 4875 scope.go:117] "RemoveContainer" containerID="73f217e83c2f728acd0f6dc0a753dafc32208e19812dd448cb82fc34bcf1d82d" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.887088 4875 scope.go:117] "RemoveContainer" containerID="b162c4a4993b2b09a2b929a70ed66ebbc075f9d93129ce7907baa74c43709314" Jan 30 17:00:03 crc kubenswrapper[4875]: I0130 17:00:03.901387 4875 scope.go:117] "RemoveContainer" containerID="0d825bfb3b84827511243fef8ea686dc1c9e948db583f8aa11f1d10cbc20421c" Jan 30 17:00:04 crc kubenswrapper[4875]: I0130 17:00:04.012931 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr" Jan 30 17:00:04 crc kubenswrapper[4875]: I0130 17:00:04.147109 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="228882df-4f66-4157-836b-f95a581fe216" path="/var/lib/kubelet/pods/228882df-4f66-4157-836b-f95a581fe216/volumes" Jan 30 17:00:04 crc kubenswrapper[4875]: I0130 17:00:04.148677 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6891de92-f1af-4dcc-bc97-c2a2a647515b" path="/var/lib/kubelet/pods/6891de92-f1af-4dcc-bc97-c2a2a647515b/volumes" Jan 30 17:00:04 crc kubenswrapper[4875]: I0130 17:00:04.194085 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83d856a2-4b52-431d-9ef1-d06ce610b7c1-config-volume\") pod \"83d856a2-4b52-431d-9ef1-d06ce610b7c1\" (UID: \"83d856a2-4b52-431d-9ef1-d06ce610b7c1\") " Jan 30 17:00:04 crc kubenswrapper[4875]: I0130 17:00:04.194225 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83d856a2-4b52-431d-9ef1-d06ce610b7c1-secret-volume\") pod \"83d856a2-4b52-431d-9ef1-d06ce610b7c1\" (UID: \"83d856a2-4b52-431d-9ef1-d06ce610b7c1\") " Jan 30 17:00:04 crc kubenswrapper[4875]: I0130 17:00:04.194352 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-247z4\" (UniqueName: \"kubernetes.io/projected/83d856a2-4b52-431d-9ef1-d06ce610b7c1-kube-api-access-247z4\") pod \"83d856a2-4b52-431d-9ef1-d06ce610b7c1\" (UID: \"83d856a2-4b52-431d-9ef1-d06ce610b7c1\") " Jan 30 17:00:04 crc kubenswrapper[4875]: I0130 17:00:04.194833 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83d856a2-4b52-431d-9ef1-d06ce610b7c1-config-volume" (OuterVolumeSpecName: "config-volume") pod "83d856a2-4b52-431d-9ef1-d06ce610b7c1" (UID: "83d856a2-4b52-431d-9ef1-d06ce610b7c1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:04 crc kubenswrapper[4875]: I0130 17:00:04.195037 4875 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83d856a2-4b52-431d-9ef1-d06ce610b7c1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:04 crc kubenswrapper[4875]: I0130 17:00:04.198135 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d856a2-4b52-431d-9ef1-d06ce610b7c1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "83d856a2-4b52-431d-9ef1-d06ce610b7c1" (UID: "83d856a2-4b52-431d-9ef1-d06ce610b7c1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:04 crc kubenswrapper[4875]: I0130 17:00:04.198051 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d856a2-4b52-431d-9ef1-d06ce610b7c1-kube-api-access-247z4" (OuterVolumeSpecName: "kube-api-access-247z4") pod "83d856a2-4b52-431d-9ef1-d06ce610b7c1" (UID: "83d856a2-4b52-431d-9ef1-d06ce610b7c1"). InnerVolumeSpecName "kube-api-access-247z4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:04 crc kubenswrapper[4875]: I0130 17:00:04.296421 4875 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83d856a2-4b52-431d-9ef1-d06ce610b7c1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:04 crc kubenswrapper[4875]: I0130 17:00:04.296464 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-247z4\" (UniqueName: \"kubernetes.io/projected/83d856a2-4b52-431d-9ef1-d06ce610b7c1-kube-api-access-247z4\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:04 crc kubenswrapper[4875]: I0130 17:00:04.805689 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr" Jan 30 17:00:04 crc kubenswrapper[4875]: I0130 17:00:04.805712 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-9kthr" event={"ID":"83d856a2-4b52-431d-9ef1-d06ce610b7c1","Type":"ContainerDied","Data":"0ce51d4ad61560233035e2b44777406a5637f9e467965bece154d8a991c02879"} Jan 30 17:00:04 crc kubenswrapper[4875]: I0130 17:00:04.805752 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ce51d4ad61560233035e2b44777406a5637f9e467965bece154d8a991c02879" Jan 30 17:00:04 crc kubenswrapper[4875]: I0130 17:00:04.806860 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fgs4k" podUID="598755be-9785-4050-aa29-1904ae17e4c8" containerName="registry-server" containerID="cri-o://9297c117c4a4d70a3229075904efb67489a95d0b63f5a733542d8f387bff6f45" gracePeriod=2 Jan 30 17:00:06 crc kubenswrapper[4875]: I0130 17:00:06.164918 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j4gqh"] Jan 30 17:00:06 crc kubenswrapper[4875]: I0130 17:00:06.165331 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j4gqh" podUID="67e9dfb9-b895-42da-9d5d-083ffb98fc19" containerName="registry-server" containerID="cri-o://bc718a5f6f2af173ce2cc82e12a9781cecb0e86c64c833777657f4ec20ab10fe" gracePeriod=2 Jan 30 17:00:07 crc kubenswrapper[4875]: I0130 17:00:07.823355 4875 generic.go:334] "Generic (PLEG): container finished" podID="598755be-9785-4050-aa29-1904ae17e4c8" containerID="9297c117c4a4d70a3229075904efb67489a95d0b63f5a733542d8f387bff6f45" exitCode=0 Jan 30 17:00:07 crc kubenswrapper[4875]: I0130 17:00:07.823411 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgs4k" event={"ID":"598755be-9785-4050-aa29-1904ae17e4c8","Type":"ContainerDied","Data":"9297c117c4a4d70a3229075904efb67489a95d0b63f5a733542d8f387bff6f45"} Jan 30 17:00:07 crc kubenswrapper[4875]: I0130 17:00:07.828267 4875 generic.go:334] "Generic (PLEG): container finished" podID="67e9dfb9-b895-42da-9d5d-083ffb98fc19" containerID="bc718a5f6f2af173ce2cc82e12a9781cecb0e86c64c833777657f4ec20ab10fe" exitCode=0 Jan 30 17:00:07 crc kubenswrapper[4875]: I0130 17:00:07.828316 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4gqh" event={"ID":"67e9dfb9-b895-42da-9d5d-083ffb98fc19","Type":"ContainerDied","Data":"bc718a5f6f2af173ce2cc82e12a9781cecb0e86c64c833777657f4ec20ab10fe"} Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.007352 4875 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgs4k" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.049062 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhp7t\" (UniqueName: \"kubernetes.io/projected/598755be-9785-4050-aa29-1904ae17e4c8-kube-api-access-dhp7t\") pod \"598755be-9785-4050-aa29-1904ae17e4c8\" (UID: \"598755be-9785-4050-aa29-1904ae17e4c8\") " Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.049133 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598755be-9785-4050-aa29-1904ae17e4c8-catalog-content\") pod \"598755be-9785-4050-aa29-1904ae17e4c8\" (UID: \"598755be-9785-4050-aa29-1904ae17e4c8\") " Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.049264 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598755be-9785-4050-aa29-1904ae17e4c8-utilities\") pod \"598755be-9785-4050-aa29-1904ae17e4c8\" (UID: \"598755be-9785-4050-aa29-1904ae17e4c8\") " Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.050538 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/598755be-9785-4050-aa29-1904ae17e4c8-utilities" (OuterVolumeSpecName: "utilities") pod "598755be-9785-4050-aa29-1904ae17e4c8" (UID: "598755be-9785-4050-aa29-1904ae17e4c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.056062 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598755be-9785-4050-aa29-1904ae17e4c8-kube-api-access-dhp7t" (OuterVolumeSpecName: "kube-api-access-dhp7t") pod "598755be-9785-4050-aa29-1904ae17e4c8" (UID: "598755be-9785-4050-aa29-1904ae17e4c8"). InnerVolumeSpecName "kube-api-access-dhp7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.074720 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/598755be-9785-4050-aa29-1904ae17e4c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "598755be-9785-4050-aa29-1904ae17e4c8" (UID: "598755be-9785-4050-aa29-1904ae17e4c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.079556 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j4gqh" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.150816 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62f4r\" (UniqueName: \"kubernetes.io/projected/67e9dfb9-b895-42da-9d5d-083ffb98fc19-kube-api-access-62f4r\") pod \"67e9dfb9-b895-42da-9d5d-083ffb98fc19\" (UID: \"67e9dfb9-b895-42da-9d5d-083ffb98fc19\") " Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.150958 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e9dfb9-b895-42da-9d5d-083ffb98fc19-catalog-content\") pod \"67e9dfb9-b895-42da-9d5d-083ffb98fc19\" (UID: \"67e9dfb9-b895-42da-9d5d-083ffb98fc19\") " Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.150983 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e9dfb9-b895-42da-9d5d-083ffb98fc19-utilities\") pod \"67e9dfb9-b895-42da-9d5d-083ffb98fc19\" (UID: \"67e9dfb9-b895-42da-9d5d-083ffb98fc19\") " Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.151215 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhp7t\" (UniqueName: \"kubernetes.io/projected/598755be-9785-4050-aa29-1904ae17e4c8-kube-api-access-dhp7t\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.151227 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/598755be-9785-4050-aa29-1904ae17e4c8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.151237 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/598755be-9785-4050-aa29-1904ae17e4c8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.151931 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67e9dfb9-b895-42da-9d5d-083ffb98fc19-utilities" (OuterVolumeSpecName: "utilities") pod "67e9dfb9-b895-42da-9d5d-083ffb98fc19" (UID: "67e9dfb9-b895-42da-9d5d-083ffb98fc19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.160343 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e9dfb9-b895-42da-9d5d-083ffb98fc19-kube-api-access-62f4r" (OuterVolumeSpecName: "kube-api-access-62f4r") pod "67e9dfb9-b895-42da-9d5d-083ffb98fc19" (UID: "67e9dfb9-b895-42da-9d5d-083ffb98fc19"). InnerVolumeSpecName "kube-api-access-62f4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.252679 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62f4r\" (UniqueName: \"kubernetes.io/projected/67e9dfb9-b895-42da-9d5d-083ffb98fc19-kube-api-access-62f4r\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.252713 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67e9dfb9-b895-42da-9d5d-083ffb98fc19-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.340114 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67e9dfb9-b895-42da-9d5d-083ffb98fc19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67e9dfb9-b895-42da-9d5d-083ffb98fc19" (UID: "67e9dfb9-b895-42da-9d5d-083ffb98fc19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.354241 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67e9dfb9-b895-42da-9d5d-083ffb98fc19-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.844969 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fgs4k" event={"ID":"598755be-9785-4050-aa29-1904ae17e4c8","Type":"ContainerDied","Data":"6a54a165cb52079293b0cd605d816f19cd688b205e76311ce68786ff261a297c"} Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.845321 4875 scope.go:117] "RemoveContainer" containerID="9297c117c4a4d70a3229075904efb67489a95d0b63f5a733542d8f387bff6f45" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.846133 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fgs4k" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.849568 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j4gqh" event={"ID":"67e9dfb9-b895-42da-9d5d-083ffb98fc19","Type":"ContainerDied","Data":"eec1b745bb867e953874948726d1e6541b4137333383b1b7e6c9015dcfb84adc"} Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.849699 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j4gqh" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.864720 4875 scope.go:117] "RemoveContainer" containerID="07bd1c933e76af6b17634404371a3f008332df6ae61a3ef30588cce4babcd7f9" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.870285 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgs4k"] Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.876202 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fgs4k"] Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.887895 4875 scope.go:117] "RemoveContainer" containerID="293e51b261531556c351ed9e2f2bc4f68dac7c73c7916e27fd02324d740a0e3b" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.887953 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j4gqh"] Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.889955 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j4gqh"] Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.903770 4875 scope.go:117] "RemoveContainer" containerID="bc718a5f6f2af173ce2cc82e12a9781cecb0e86c64c833777657f4ec20ab10fe" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.915147 4875 scope.go:117] "RemoveContainer" containerID="e8892d750bb2eecd5a4354b3f49f93df613be8cd32e2ccd9e13dbc135ff396c9" Jan 30 17:00:08 crc kubenswrapper[4875]: I0130 17:00:08.930311 4875 scope.go:117] "RemoveContainer" containerID="49ded4fed9990548b8a4b3bb0cb0257946aa2f6ec8c0490251e4d379ba4bf698" Jan 30 17:00:10 crc kubenswrapper[4875]: I0130 17:00:10.143482 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="598755be-9785-4050-aa29-1904ae17e4c8" path="/var/lib/kubelet/pods/598755be-9785-4050-aa29-1904ae17e4c8/volumes" Jan 30 17:00:10 crc kubenswrapper[4875]: I0130 17:00:10.144166 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e9dfb9-b895-42da-9d5d-083ffb98fc19" path="/var/lib/kubelet/pods/67e9dfb9-b895-42da-9d5d-083ffb98fc19/volumes" Jan 30 17:00:12 crc kubenswrapper[4875]: I0130 17:00:12.425643 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gv6jw"] Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.793644 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pdr7w"] Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.794362 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pdr7w" podUID="0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79" containerName="registry-server" containerID="cri-o://2d21bd1e721cbfba8e6c1fd3bc4941ebcb69277e211dc78d389d84a935f270c6" gracePeriod=30 Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.800249 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sd4tv"] Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.800464 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sd4tv" podUID="87c78ecd-3fa5-40a9-ac0d-25449555b524" containerName="registry-server" containerID="cri-o://96678d64fc43b0136ff29bc837c3057c9738314ed8145022426a99eb3afbbc4f" gracePeriod=30 Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.813752 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-6hpsd"] Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.813977 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" podUID="beaaba45-df33-4540-ab78-79f1dc92f87b" containerName="marketplace-operator" containerID="cri-o://dc68a31a351ff1c2c90c9e1fa1861fbf7af9afeefb86abfb77c1fd1d96523cc0" gracePeriod=30 Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.823877 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7544f"] Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.824093 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7544f" podUID="438bec48-3499-4e88-b9f1-cfb1126424ad" containerName="registry-server" containerID="cri-o://0f8a00fed494e0a5509d2d10b8c5dc1480faa84d7da571222257f8e452c78291" gracePeriod=30 Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.892540 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7g2d"] Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.892771 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p7g2d" podUID="926bc7fe-7fc5-4f59-b161-f32ff75b40b3" containerName="registry-server" containerID="cri-o://9ade1006ca5e6d053aefdb283989416b13952322f4a8556e24ff26e640f3d6a5" gracePeriod=30 Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.899656 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j9hxl"] Jan 30 17:00:13 crc kubenswrapper[4875]: E0130 17:00:13.900052 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228882df-4f66-4157-836b-f95a581fe216" containerName="extract-content" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.900117 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="228882df-4f66-4157-836b-f95a581fe216" containerName="extract-content" Jan 30 17:00:13 crc kubenswrapper[4875]: E0130 17:00:13.900212 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6891de92-f1af-4dcc-bc97-c2a2a647515b" containerName="extract-utilities" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.900284 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="6891de92-f1af-4dcc-bc97-c2a2a647515b" containerName="extract-utilities" Jan 30 17:00:13 crc kubenswrapper[4875]: E0130 17:00:13.900341 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598755be-9785-4050-aa29-1904ae17e4c8" containerName="registry-server" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.900405 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="598755be-9785-4050-aa29-1904ae17e4c8" containerName="registry-server" Jan 30 17:00:13 crc kubenswrapper[4875]: E0130 17:00:13.900460 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228882df-4f66-4157-836b-f95a581fe216" containerName="extract-utilities" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.900509 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="228882df-4f66-4157-836b-f95a581fe216" containerName="extract-utilities" Jan 30 17:00:13 crc kubenswrapper[4875]: E0130 17:00:13.900572 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598755be-9785-4050-aa29-1904ae17e4c8" containerName="extract-content" Jan 30 17:00:13 crc 
kubenswrapper[4875]: I0130 17:00:13.900647 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="598755be-9785-4050-aa29-1904ae17e4c8" containerName="extract-content" Jan 30 17:00:13 crc kubenswrapper[4875]: E0130 17:00:13.900700 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228882df-4f66-4157-836b-f95a581fe216" containerName="registry-server" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.900820 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="228882df-4f66-4157-836b-f95a581fe216" containerName="registry-server" Jan 30 17:00:13 crc kubenswrapper[4875]: E0130 17:00:13.900913 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e9dfb9-b895-42da-9d5d-083ffb98fc19" containerName="registry-server" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.900999 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e9dfb9-b895-42da-9d5d-083ffb98fc19" containerName="registry-server" Jan 30 17:00:13 crc kubenswrapper[4875]: E0130 17:00:13.901070 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598755be-9785-4050-aa29-1904ae17e4c8" containerName="extract-utilities" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.901129 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="598755be-9785-4050-aa29-1904ae17e4c8" containerName="extract-utilities" Jan 30 17:00:13 crc kubenswrapper[4875]: E0130 17:00:13.901192 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d856a2-4b52-431d-9ef1-d06ce610b7c1" containerName="collect-profiles" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.901252 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d856a2-4b52-431d-9ef1-d06ce610b7c1" containerName="collect-profiles" Jan 30 17:00:13 crc kubenswrapper[4875]: E0130 17:00:13.901310 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e9dfb9-b895-42da-9d5d-083ffb98fc19" containerName="extract-utilities" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.901367 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e9dfb9-b895-42da-9d5d-083ffb98fc19" containerName="extract-utilities" Jan 30 17:00:13 crc kubenswrapper[4875]: E0130 17:00:13.901428 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6891de92-f1af-4dcc-bc97-c2a2a647515b" containerName="extract-content" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.901480 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="6891de92-f1af-4dcc-bc97-c2a2a647515b" containerName="extract-content" Jan 30 17:00:13 crc kubenswrapper[4875]: E0130 17:00:13.901545 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6891de92-f1af-4dcc-bc97-c2a2a647515b" containerName="registry-server" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.901621 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="6891de92-f1af-4dcc-bc97-c2a2a647515b" containerName="registry-server" Jan 30 17:00:13 crc kubenswrapper[4875]: E0130 17:00:13.901768 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e9dfb9-b895-42da-9d5d-083ffb98fc19" containerName="extract-content" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.901829 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e9dfb9-b895-42da-9d5d-083ffb98fc19" containerName="extract-content" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.902060 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e9dfb9-b895-42da-9d5d-083ffb98fc19" 
containerName="registry-server" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.902142 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="6891de92-f1af-4dcc-bc97-c2a2a647515b" containerName="registry-server" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.902221 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="83d856a2-4b52-431d-9ef1-d06ce610b7c1" containerName="collect-profiles" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.902286 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="598755be-9785-4050-aa29-1904ae17e4c8" containerName="registry-server" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.902345 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="228882df-4f66-4157-836b-f95a581fe216" containerName="registry-server" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.902896 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-j9hxl" Jan 30 17:00:13 crc kubenswrapper[4875]: I0130 17:00:13.913715 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j9hxl"] Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.019048 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c92mc\" (UniqueName: \"kubernetes.io/projected/ee16d58a-dd09-48a5-aa90-2788f5bd8fa2-kube-api-access-c92mc\") pod \"marketplace-operator-79b997595-j9hxl\" (UID: \"ee16d58a-dd09-48a5-aa90-2788f5bd8fa2\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9hxl" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.019495 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee16d58a-dd09-48a5-aa90-2788f5bd8fa2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-j9hxl\" (UID: \"ee16d58a-dd09-48a5-aa90-2788f5bd8fa2\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9hxl" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.019628 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee16d58a-dd09-48a5-aa90-2788f5bd8fa2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-j9hxl\" (UID: \"ee16d58a-dd09-48a5-aa90-2788f5bd8fa2\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9hxl" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.122303 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee16d58a-dd09-48a5-aa90-2788f5bd8fa2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-j9hxl\" (UID: \"ee16d58a-dd09-48a5-aa90-2788f5bd8fa2\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9hxl" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.122352 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c92mc\" (UniqueName: \"kubernetes.io/projected/ee16d58a-dd09-48a5-aa90-2788f5bd8fa2-kube-api-access-c92mc\") pod \"marketplace-operator-79b997595-j9hxl\" (UID: \"ee16d58a-dd09-48a5-aa90-2788f5bd8fa2\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9hxl" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.122397 4875 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee16d58a-dd09-48a5-aa90-2788f5bd8fa2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-j9hxl\" (UID: \"ee16d58a-dd09-48a5-aa90-2788f5bd8fa2\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9hxl" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.123535 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ee16d58a-dd09-48a5-aa90-2788f5bd8fa2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-j9hxl\" (UID: \"ee16d58a-dd09-48a5-aa90-2788f5bd8fa2\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9hxl" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.128394 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ee16d58a-dd09-48a5-aa90-2788f5bd8fa2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-j9hxl\" (UID: \"ee16d58a-dd09-48a5-aa90-2788f5bd8fa2\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9hxl" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.145101 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c92mc\" (UniqueName: \"kubernetes.io/projected/ee16d58a-dd09-48a5-aa90-2788f5bd8fa2-kube-api-access-c92mc\") pod \"marketplace-operator-79b997595-j9hxl\" (UID: \"ee16d58a-dd09-48a5-aa90-2788f5bd8fa2\") " pod="openshift-marketplace/marketplace-operator-79b997595-j9hxl" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.311026 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-j9hxl" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.332021 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdr7w" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.434447 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dqx5\" (UniqueName: \"kubernetes.io/projected/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-kube-api-access-9dqx5\") pod \"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79\" (UID: \"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79\") " Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.434544 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-utilities\") pod \"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79\" (UID: \"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79\") " Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.434600 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-catalog-content\") pod \"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79\" (UID: \"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79\") " Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.436810 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-utilities" (OuterVolumeSpecName: "utilities") pod "0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79" (UID: "0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.438750 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-kube-api-access-9dqx5" (OuterVolumeSpecName: "kube-api-access-9dqx5") pod "0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79" (UID: "0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79"). InnerVolumeSpecName "kube-api-access-9dqx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.465764 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7g2d" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.467676 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.535083 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/beaaba45-df33-4540-ab78-79f1dc92f87b-marketplace-trusted-ca\") pod \"beaaba45-df33-4540-ab78-79f1dc92f87b\" (UID: \"beaaba45-df33-4540-ab78-79f1dc92f87b\") " Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.535124 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-utilities\") pod \"926bc7fe-7fc5-4f59-b161-f32ff75b40b3\" (UID: \"926bc7fe-7fc5-4f59-b161-f32ff75b40b3\") " Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.535150 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/beaaba45-df33-4540-ab78-79f1dc92f87b-marketplace-operator-metrics\") pod \"beaaba45-df33-4540-ab78-79f1dc92f87b\" (UID: \"beaaba45-df33-4540-ab78-79f1dc92f87b\") " Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.535177 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-catalog-content\") pod \"926bc7fe-7fc5-4f59-b161-f32ff75b40b3\" (UID: \"926bc7fe-7fc5-4f59-b161-f32ff75b40b3\") " Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.535203 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqv9p\" (UniqueName: \"kubernetes.io/projected/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-kube-api-access-mqv9p\") pod \"926bc7fe-7fc5-4f59-b161-f32ff75b40b3\" (UID: \"926bc7fe-7fc5-4f59-b161-f32ff75b40b3\") " Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.535236 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhxj6\" (UniqueName: \"kubernetes.io/projected/beaaba45-df33-4540-ab78-79f1dc92f87b-kube-api-access-lhxj6\") pod \"beaaba45-df33-4540-ab78-79f1dc92f87b\" (UID: \"beaaba45-df33-4540-ab78-79f1dc92f87b\") " Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.535358 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.535369 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dqx5\" (UniqueName: 
\"kubernetes.io/projected/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-kube-api-access-9dqx5\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.536858 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-utilities" (OuterVolumeSpecName: "utilities") pod "926bc7fe-7fc5-4f59-b161-f32ff75b40b3" (UID: "926bc7fe-7fc5-4f59-b161-f32ff75b40b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.537197 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79" (UID: "0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.537392 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beaaba45-df33-4540-ab78-79f1dc92f87b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "beaaba45-df33-4540-ab78-79f1dc92f87b" (UID: "beaaba45-df33-4540-ab78-79f1dc92f87b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.537950 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7544f" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.539546 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beaaba45-df33-4540-ab78-79f1dc92f87b-kube-api-access-lhxj6" (OuterVolumeSpecName: "kube-api-access-lhxj6") pod "beaaba45-df33-4540-ab78-79f1dc92f87b" (UID: "beaaba45-df33-4540-ab78-79f1dc92f87b"). InnerVolumeSpecName "kube-api-access-lhxj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.541083 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-kube-api-access-mqv9p" (OuterVolumeSpecName: "kube-api-access-mqv9p") pod "926bc7fe-7fc5-4f59-b161-f32ff75b40b3" (UID: "926bc7fe-7fc5-4f59-b161-f32ff75b40b3"). InnerVolumeSpecName "kube-api-access-mqv9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.550267 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beaaba45-df33-4540-ab78-79f1dc92f87b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "beaaba45-df33-4540-ab78-79f1dc92f87b" (UID: "beaaba45-df33-4540-ab78-79f1dc92f87b"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.636159 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.641068 4875 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/beaaba45-df33-4540-ab78-79f1dc92f87b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.641167 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.641241 4875 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/beaaba45-df33-4540-ab78-79f1dc92f87b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.641281 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqv9p\" (UniqueName: \"kubernetes.io/projected/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-kube-api-access-mqv9p\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.641291 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhxj6\" (UniqueName: \"kubernetes.io/projected/beaaba45-df33-4540-ab78-79f1dc92f87b-kube-api-access-lhxj6\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.712424 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "926bc7fe-7fc5-4f59-b161-f32ff75b40b3" (UID: "926bc7fe-7fc5-4f59-b161-f32ff75b40b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.744517 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/438bec48-3499-4e88-b9f1-cfb1126424ad-utilities\") pod \"438bec48-3499-4e88-b9f1-cfb1126424ad\" (UID: \"438bec48-3499-4e88-b9f1-cfb1126424ad\") " Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.744638 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/438bec48-3499-4e88-b9f1-cfb1126424ad-catalog-content\") pod \"438bec48-3499-4e88-b9f1-cfb1126424ad\" (UID: \"438bec48-3499-4e88-b9f1-cfb1126424ad\") " Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.744674 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cprs\" (UniqueName: \"kubernetes.io/projected/438bec48-3499-4e88-b9f1-cfb1126424ad-kube-api-access-7cprs\") pod \"438bec48-3499-4e88-b9f1-cfb1126424ad\" (UID: \"438bec48-3499-4e88-b9f1-cfb1126424ad\") " Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.744856 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/926bc7fe-7fc5-4f59-b161-f32ff75b40b3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.745338 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/438bec48-3499-4e88-b9f1-cfb1126424ad-utilities" (OuterVolumeSpecName: "utilities") pod "438bec48-3499-4e88-b9f1-cfb1126424ad" (UID: "438bec48-3499-4e88-b9f1-cfb1126424ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.747124 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/438bec48-3499-4e88-b9f1-cfb1126424ad-kube-api-access-7cprs" (OuterVolumeSpecName: "kube-api-access-7cprs") pod "438bec48-3499-4e88-b9f1-cfb1126424ad" (UID: "438bec48-3499-4e88-b9f1-cfb1126424ad"). InnerVolumeSpecName "kube-api-access-7cprs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.769861 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/438bec48-3499-4e88-b9f1-cfb1126424ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "438bec48-3499-4e88-b9f1-cfb1126424ad" (UID: "438bec48-3499-4e88-b9f1-cfb1126424ad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.779889 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j9hxl"] Jan 30 17:00:14 crc kubenswrapper[4875]: W0130 17:00:14.781438 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee16d58a_dd09_48a5_aa90_2788f5bd8fa2.slice/crio-846d9c86839d6288d5a29ada307748d13e9b4995570f36a41c18fd252c1cb731 WatchSource:0}: Error finding container 846d9c86839d6288d5a29ada307748d13e9b4995570f36a41c18fd252c1cb731: Status 404 returned error can't find the container with id 846d9c86839d6288d5a29ada307748d13e9b4995570f36a41c18fd252c1cb731 Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.845820 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/438bec48-3499-4e88-b9f1-cfb1126424ad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.845846 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cprs\" (UniqueName: \"kubernetes.io/projected/438bec48-3499-4e88-b9f1-cfb1126424ad-kube-api-access-7cprs\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.845857 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/438bec48-3499-4e88-b9f1-cfb1126424ad-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.868288 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sd4tv" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.897193 4875 generic.go:334] "Generic (PLEG): container finished" podID="926bc7fe-7fc5-4f59-b161-f32ff75b40b3" containerID="9ade1006ca5e6d053aefdb283989416b13952322f4a8556e24ff26e640f3d6a5" exitCode=0 Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.897251 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7g2d" event={"ID":"926bc7fe-7fc5-4f59-b161-f32ff75b40b3","Type":"ContainerDied","Data":"9ade1006ca5e6d053aefdb283989416b13952322f4a8556e24ff26e640f3d6a5"} Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.897277 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7g2d" event={"ID":"926bc7fe-7fc5-4f59-b161-f32ff75b40b3","Type":"ContainerDied","Data":"1ccfa79f68248134b5bc68a99f7892f553f961a1bc328ee718b7b56e45bcb4b7"} Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.897293 4875 scope.go:117] "RemoveContainer" containerID="9ade1006ca5e6d053aefdb283989416b13952322f4a8556e24ff26e640f3d6a5" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.897396 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p7g2d" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.902793 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-j9hxl" event={"ID":"ee16d58a-dd09-48a5-aa90-2788f5bd8fa2","Type":"ContainerStarted","Data":"846d9c86839d6288d5a29ada307748d13e9b4995570f36a41c18fd252c1cb731"} Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.917140 4875 generic.go:334] "Generic (PLEG): container finished" podID="87c78ecd-3fa5-40a9-ac0d-25449555b524" containerID="96678d64fc43b0136ff29bc837c3057c9738314ed8145022426a99eb3afbbc4f" exitCode=0 Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.917222 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd4tv" event={"ID":"87c78ecd-3fa5-40a9-ac0d-25449555b524","Type":"ContainerDied","Data":"96678d64fc43b0136ff29bc837c3057c9738314ed8145022426a99eb3afbbc4f"} Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.917247 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sd4tv" event={"ID":"87c78ecd-3fa5-40a9-ac0d-25449555b524","Type":"ContainerDied","Data":"2ebbca42502f007f99e04dcc1ffa70cc4f61d768b17d9156e329cd4671c303c2"} Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.917313 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sd4tv" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.930332 4875 generic.go:334] "Generic (PLEG): container finished" podID="0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79" containerID="2d21bd1e721cbfba8e6c1fd3bc4941ebcb69277e211dc78d389d84a935f270c6" exitCode=0 Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.930417 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdr7w" event={"ID":"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79","Type":"ContainerDied","Data":"2d21bd1e721cbfba8e6c1fd3bc4941ebcb69277e211dc78d389d84a935f270c6"} Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.930428 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdr7w" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.930519 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdr7w" event={"ID":"0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79","Type":"ContainerDied","Data":"d3a5a55784bbcb151c45b081c39754b8d00d9ea7792f52b7c12140ebac49a90c"} Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.933351 4875 generic.go:334] "Generic (PLEG): container finished" podID="438bec48-3499-4e88-b9f1-cfb1126424ad" containerID="0f8a00fed494e0a5509d2d10b8c5dc1480faa84d7da571222257f8e452c78291" exitCode=0 Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.933419 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7544f" event={"ID":"438bec48-3499-4e88-b9f1-cfb1126424ad","Type":"ContainerDied","Data":"0f8a00fed494e0a5509d2d10b8c5dc1480faa84d7da571222257f8e452c78291"} Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.933435 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7544f" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.933445 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7544f" event={"ID":"438bec48-3499-4e88-b9f1-cfb1126424ad","Type":"ContainerDied","Data":"da67e34deaafa6984490230b44e258a97d103f663e34c6bff452852edf260e81"} Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.940591 4875 generic.go:334] "Generic (PLEG): container finished" podID="beaaba45-df33-4540-ab78-79f1dc92f87b" containerID="dc68a31a351ff1c2c90c9e1fa1861fbf7af9afeefb86abfb77c1fd1d96523cc0" exitCode=0 Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.940647 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" event={"ID":"beaaba45-df33-4540-ab78-79f1dc92f87b","Type":"ContainerDied","Data":"dc68a31a351ff1c2c90c9e1fa1861fbf7af9afeefb86abfb77c1fd1d96523cc0"} Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.940678 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" event={"ID":"beaaba45-df33-4540-ab78-79f1dc92f87b","Type":"ContainerDied","Data":"3ced7b6e312dd7257e22f25de89481dd89ab4d8533d563d4b3471998e45c09e8"} Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.940838 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6hpsd" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.948292 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87c78ecd-3fa5-40a9-ac0d-25449555b524-utilities\") pod \"87c78ecd-3fa5-40a9-ac0d-25449555b524\" (UID: \"87c78ecd-3fa5-40a9-ac0d-25449555b524\") " Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.948461 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87c78ecd-3fa5-40a9-ac0d-25449555b524-catalog-content\") pod \"87c78ecd-3fa5-40a9-ac0d-25449555b524\" (UID: \"87c78ecd-3fa5-40a9-ac0d-25449555b524\") " Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.949038 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhv4j\" (UniqueName: \"kubernetes.io/projected/87c78ecd-3fa5-40a9-ac0d-25449555b524-kube-api-access-xhv4j\") pod \"87c78ecd-3fa5-40a9-ac0d-25449555b524\" (UID: \"87c78ecd-3fa5-40a9-ac0d-25449555b524\") " Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.950574 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87c78ecd-3fa5-40a9-ac0d-25449555b524-utilities" (OuterVolumeSpecName: "utilities") pod "87c78ecd-3fa5-40a9-ac0d-25449555b524" (UID: "87c78ecd-3fa5-40a9-ac0d-25449555b524"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.954765 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87c78ecd-3fa5-40a9-ac0d-25449555b524-kube-api-access-xhv4j" (OuterVolumeSpecName: "kube-api-access-xhv4j") pod "87c78ecd-3fa5-40a9-ac0d-25449555b524" (UID: "87c78ecd-3fa5-40a9-ac0d-25449555b524"). InnerVolumeSpecName "kube-api-access-xhv4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.975575 4875 scope.go:117] "RemoveContainer" containerID="949ab4631216a9b322aadf65137947c5a7c644b50e4419e1694c09a9ba1cd2be" Jan 30 17:00:14 crc kubenswrapper[4875]: I0130 17:00:14.998147 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7g2d"] Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.001616 4875 scope.go:117] "RemoveContainer" containerID="b668a8d22912ceed4e6196452d7fe76d12c771c427ac3f79e290b4a04e1d73d7" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.005568 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p7g2d"] Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.008090 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6hpsd"] Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.017687 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6hpsd"] Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.019620 4875 scope.go:117] "RemoveContainer" containerID="9ade1006ca5e6d053aefdb283989416b13952322f4a8556e24ff26e640f3d6a5" Jan 30 17:00:15 crc kubenswrapper[4875]: E0130 17:00:15.020128 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ade1006ca5e6d053aefdb283989416b13952322f4a8556e24ff26e640f3d6a5\": container with ID starting with 9ade1006ca5e6d053aefdb283989416b13952322f4a8556e24ff26e640f3d6a5 not found: ID does not exist" containerID="9ade1006ca5e6d053aefdb283989416b13952322f4a8556e24ff26e640f3d6a5" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.020190 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ade1006ca5e6d053aefdb283989416b13952322f4a8556e24ff26e640f3d6a5"} err="failed to get container status \"9ade1006ca5e6d053aefdb283989416b13952322f4a8556e24ff26e640f3d6a5\": rpc error: code = NotFound desc = could not find container \"9ade1006ca5e6d053aefdb283989416b13952322f4a8556e24ff26e640f3d6a5\": container with ID starting with 9ade1006ca5e6d053aefdb283989416b13952322f4a8556e24ff26e640f3d6a5 not found: ID does not exist" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.020225 4875 scope.go:117] "RemoveContainer" containerID="949ab4631216a9b322aadf65137947c5a7c644b50e4419e1694c09a9ba1cd2be" Jan 30 17:00:15 crc kubenswrapper[4875]: E0130 17:00:15.020551 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"949ab4631216a9b322aadf65137947c5a7c644b50e4419e1694c09a9ba1cd2be\": container with ID starting with 949ab4631216a9b322aadf65137947c5a7c644b50e4419e1694c09a9ba1cd2be not found: ID does not exist" containerID="949ab4631216a9b322aadf65137947c5a7c644b50e4419e1694c09a9ba1cd2be" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.020599 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"949ab4631216a9b322aadf65137947c5a7c644b50e4419e1694c09a9ba1cd2be"} err="failed to get container status \"949ab4631216a9b322aadf65137947c5a7c644b50e4419e1694c09a9ba1cd2be\": rpc error: code = NotFound desc = could not find container \"949ab4631216a9b322aadf65137947c5a7c644b50e4419e1694c09a9ba1cd2be\": container with ID starting with 
949ab4631216a9b322aadf65137947c5a7c644b50e4419e1694c09a9ba1cd2be not found: ID does not exist" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.020625 4875 scope.go:117] "RemoveContainer" containerID="b668a8d22912ceed4e6196452d7fe76d12c771c427ac3f79e290b4a04e1d73d7" Jan 30 17:00:15 crc kubenswrapper[4875]: E0130 17:00:15.020963 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b668a8d22912ceed4e6196452d7fe76d12c771c427ac3f79e290b4a04e1d73d7\": container with ID starting with b668a8d22912ceed4e6196452d7fe76d12c771c427ac3f79e290b4a04e1d73d7 not found: ID does not exist" containerID="b668a8d22912ceed4e6196452d7fe76d12c771c427ac3f79e290b4a04e1d73d7" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.020987 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b668a8d22912ceed4e6196452d7fe76d12c771c427ac3f79e290b4a04e1d73d7"} err="failed to get container status \"b668a8d22912ceed4e6196452d7fe76d12c771c427ac3f79e290b4a04e1d73d7\": rpc error: code = NotFound desc = could not find container \"b668a8d22912ceed4e6196452d7fe76d12c771c427ac3f79e290b4a04e1d73d7\": container with ID starting with b668a8d22912ceed4e6196452d7fe76d12c771c427ac3f79e290b4a04e1d73d7 not found: ID does not exist" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.021006 4875 scope.go:117] "RemoveContainer" containerID="96678d64fc43b0136ff29bc837c3057c9738314ed8145022426a99eb3afbbc4f" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.022624 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87c78ecd-3fa5-40a9-ac0d-25449555b524-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87c78ecd-3fa5-40a9-ac0d-25449555b524" (UID: "87c78ecd-3fa5-40a9-ac0d-25449555b524"). InnerVolumeSpecName "catalog-content". 
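The E-level "ContainerStatus from runtime service failed ... NotFound" lines here are the benign tail of container removal: "RemoveContainer" is issued per container ID, CRI-O deletes the container, and a follow-up status probe for the same ID then comes back as gRPC NotFound, which the deletor logs and treats as already-deleted. A sketch of that idempotent handling, assuming standard grpc-go status/codes (the surrounding plumbing is invented):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats a NotFound from the runtime as success:
// the container is already gone, so deletion is idempotent.
func removeContainer(id string, containerStatus func(string) error) error {
	if err := containerStatus(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("DeleteContainer returned error for %s: %v (treated as deleted)\n", id, err)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	// Simulate the runtime having already removed the container.
	notFound := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	if err := removeContainer("9ade1006ca5e6d05", notFound); err != nil {
		fmt.Println("unexpected:", err)
	}
}
```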
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.025800 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pdr7w"] Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.032784 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pdr7w"] Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.035502 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7544f"] Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.041241 4875 scope.go:117] "RemoveContainer" containerID="c8d0e91b2a453c24666efee49448deb687eb611b4a37cf4b54a202c171107e91" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.042059 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7544f"] Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.050986 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87c78ecd-3fa5-40a9-ac0d-25449555b524-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.051007 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87c78ecd-3fa5-40a9-ac0d-25449555b524-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.051016 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhv4j\" (UniqueName: \"kubernetes.io/projected/87c78ecd-3fa5-40a9-ac0d-25449555b524-kube-api-access-xhv4j\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.053880 4875 scope.go:117] "RemoveContainer" containerID="c430f92ce05b14e06623324156644261e7802e6049396ae79b78953b3070baa5" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.067971 4875 scope.go:117] "RemoveContainer" containerID="96678d64fc43b0136ff29bc837c3057c9738314ed8145022426a99eb3afbbc4f" Jan 30 17:00:15 crc kubenswrapper[4875]: E0130 17:00:15.068265 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96678d64fc43b0136ff29bc837c3057c9738314ed8145022426a99eb3afbbc4f\": container with ID starting with 96678d64fc43b0136ff29bc837c3057c9738314ed8145022426a99eb3afbbc4f not found: ID does not exist" containerID="96678d64fc43b0136ff29bc837c3057c9738314ed8145022426a99eb3afbbc4f" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.068299 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96678d64fc43b0136ff29bc837c3057c9738314ed8145022426a99eb3afbbc4f"} err="failed to get container status \"96678d64fc43b0136ff29bc837c3057c9738314ed8145022426a99eb3afbbc4f\": rpc error: code = NotFound desc = could not find container \"96678d64fc43b0136ff29bc837c3057c9738314ed8145022426a99eb3afbbc4f\": container with ID starting with 96678d64fc43b0136ff29bc837c3057c9738314ed8145022426a99eb3afbbc4f not found: ID does not exist" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.068321 4875 scope.go:117] "RemoveContainer" containerID="c8d0e91b2a453c24666efee49448deb687eb611b4a37cf4b54a202c171107e91" Jan 30 17:00:15 crc kubenswrapper[4875]: E0130 17:00:15.068517 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c8d0e91b2a453c24666efee49448deb687eb611b4a37cf4b54a202c171107e91\": container with ID starting with c8d0e91b2a453c24666efee49448deb687eb611b4a37cf4b54a202c171107e91 not found: ID does not exist" containerID="c8d0e91b2a453c24666efee49448deb687eb611b4a37cf4b54a202c171107e91" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.068789 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d0e91b2a453c24666efee49448deb687eb611b4a37cf4b54a202c171107e91"} err="failed to get container status \"c8d0e91b2a453c24666efee49448deb687eb611b4a37cf4b54a202c171107e91\": rpc error: code = NotFound desc = could not find container \"c8d0e91b2a453c24666efee49448deb687eb611b4a37cf4b54a202c171107e91\": container with ID starting with c8d0e91b2a453c24666efee49448deb687eb611b4a37cf4b54a202c171107e91 not found: ID does not exist" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.068804 4875 scope.go:117] "RemoveContainer" containerID="c430f92ce05b14e06623324156644261e7802e6049396ae79b78953b3070baa5" Jan 30 17:00:15 crc kubenswrapper[4875]: E0130 17:00:15.069041 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c430f92ce05b14e06623324156644261e7802e6049396ae79b78953b3070baa5\": container with ID starting with c430f92ce05b14e06623324156644261e7802e6049396ae79b78953b3070baa5 not found: ID does not exist" containerID="c430f92ce05b14e06623324156644261e7802e6049396ae79b78953b3070baa5" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.069080 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c430f92ce05b14e06623324156644261e7802e6049396ae79b78953b3070baa5"} err="failed to get container status \"c430f92ce05b14e06623324156644261e7802e6049396ae79b78953b3070baa5\": rpc error: code = NotFound desc = could not find container \"c430f92ce05b14e06623324156644261e7802e6049396ae79b78953b3070baa5\": container with ID starting with c430f92ce05b14e06623324156644261e7802e6049396ae79b78953b3070baa5 not found: ID does not exist" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.069114 4875 scope.go:117] "RemoveContainer" containerID="2d21bd1e721cbfba8e6c1fd3bc4941ebcb69277e211dc78d389d84a935f270c6" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.081770 4875 scope.go:117] "RemoveContainer" containerID="f74a605721a9fa5216417b1df2dbb6ccaf93a462126d2effb4f9ebcef4f54d29" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.093702 4875 scope.go:117] "RemoveContainer" containerID="57db538bdc726299dffc198cf067ddab9d7ee689b969dbb874338f545d9996c5" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.104171 4875 scope.go:117] "RemoveContainer" containerID="2d21bd1e721cbfba8e6c1fd3bc4941ebcb69277e211dc78d389d84a935f270c6" Jan 30 17:00:15 crc kubenswrapper[4875]: E0130 17:00:15.104754 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d21bd1e721cbfba8e6c1fd3bc4941ebcb69277e211dc78d389d84a935f270c6\": container with ID starting with 2d21bd1e721cbfba8e6c1fd3bc4941ebcb69277e211dc78d389d84a935f270c6 not found: ID does not exist" containerID="2d21bd1e721cbfba8e6c1fd3bc4941ebcb69277e211dc78d389d84a935f270c6" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.104862 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d21bd1e721cbfba8e6c1fd3bc4941ebcb69277e211dc78d389d84a935f270c6"} err="failed to get container status 
\"2d21bd1e721cbfba8e6c1fd3bc4941ebcb69277e211dc78d389d84a935f270c6\": rpc error: code = NotFound desc = could not find container \"2d21bd1e721cbfba8e6c1fd3bc4941ebcb69277e211dc78d389d84a935f270c6\": container with ID starting with 2d21bd1e721cbfba8e6c1fd3bc4941ebcb69277e211dc78d389d84a935f270c6 not found: ID does not exist" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.104955 4875 scope.go:117] "RemoveContainer" containerID="f74a605721a9fa5216417b1df2dbb6ccaf93a462126d2effb4f9ebcef4f54d29" Jan 30 17:00:15 crc kubenswrapper[4875]: E0130 17:00:15.105290 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f74a605721a9fa5216417b1df2dbb6ccaf93a462126d2effb4f9ebcef4f54d29\": container with ID starting with f74a605721a9fa5216417b1df2dbb6ccaf93a462126d2effb4f9ebcef4f54d29 not found: ID does not exist" containerID="f74a605721a9fa5216417b1df2dbb6ccaf93a462126d2effb4f9ebcef4f54d29" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.105323 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74a605721a9fa5216417b1df2dbb6ccaf93a462126d2effb4f9ebcef4f54d29"} err="failed to get container status \"f74a605721a9fa5216417b1df2dbb6ccaf93a462126d2effb4f9ebcef4f54d29\": rpc error: code = NotFound desc = could not find container \"f74a605721a9fa5216417b1df2dbb6ccaf93a462126d2effb4f9ebcef4f54d29\": container with ID starting with f74a605721a9fa5216417b1df2dbb6ccaf93a462126d2effb4f9ebcef4f54d29 not found: ID does not exist" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.105347 4875 scope.go:117] "RemoveContainer" containerID="57db538bdc726299dffc198cf067ddab9d7ee689b969dbb874338f545d9996c5" Jan 30 17:00:15 crc kubenswrapper[4875]: E0130 17:00:15.106274 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57db538bdc726299dffc198cf067ddab9d7ee689b969dbb874338f545d9996c5\": container with ID starting with 57db538bdc726299dffc198cf067ddab9d7ee689b969dbb874338f545d9996c5 not found: ID does not exist" containerID="57db538bdc726299dffc198cf067ddab9d7ee689b969dbb874338f545d9996c5" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.106374 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57db538bdc726299dffc198cf067ddab9d7ee689b969dbb874338f545d9996c5"} err="failed to get container status \"57db538bdc726299dffc198cf067ddab9d7ee689b969dbb874338f545d9996c5\": rpc error: code = NotFound desc = could not find container \"57db538bdc726299dffc198cf067ddab9d7ee689b969dbb874338f545d9996c5\": container with ID starting with 57db538bdc726299dffc198cf067ddab9d7ee689b969dbb874338f545d9996c5 not found: ID does not exist" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.106458 4875 scope.go:117] "RemoveContainer" containerID="0f8a00fed494e0a5509d2d10b8c5dc1480faa84d7da571222257f8e452c78291" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.124516 4875 scope.go:117] "RemoveContainer" containerID="8d77807776aa532178722af7c5109fbf353c315afb91ee0d04f8e3bbabfd03b4" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.136221 4875 scope.go:117] "RemoveContainer" containerID="a27afb3ca094c5b4b6e24e72b8d0be184622fa17eff5e525c971e7ab09313162" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.154982 4875 scope.go:117] "RemoveContainer" containerID="0f8a00fed494e0a5509d2d10b8c5dc1480faa84d7da571222257f8e452c78291" Jan 30 17:00:15 crc 
kubenswrapper[4875]: E0130 17:00:15.155366 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f8a00fed494e0a5509d2d10b8c5dc1480faa84d7da571222257f8e452c78291\": container with ID starting with 0f8a00fed494e0a5509d2d10b8c5dc1480faa84d7da571222257f8e452c78291 not found: ID does not exist" containerID="0f8a00fed494e0a5509d2d10b8c5dc1480faa84d7da571222257f8e452c78291" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.155403 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f8a00fed494e0a5509d2d10b8c5dc1480faa84d7da571222257f8e452c78291"} err="failed to get container status \"0f8a00fed494e0a5509d2d10b8c5dc1480faa84d7da571222257f8e452c78291\": rpc error: code = NotFound desc = could not find container \"0f8a00fed494e0a5509d2d10b8c5dc1480faa84d7da571222257f8e452c78291\": container with ID starting with 0f8a00fed494e0a5509d2d10b8c5dc1480faa84d7da571222257f8e452c78291 not found: ID does not exist" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.155431 4875 scope.go:117] "RemoveContainer" containerID="8d77807776aa532178722af7c5109fbf353c315afb91ee0d04f8e3bbabfd03b4" Jan 30 17:00:15 crc kubenswrapper[4875]: E0130 17:00:15.155906 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d77807776aa532178722af7c5109fbf353c315afb91ee0d04f8e3bbabfd03b4\": container with ID starting with 8d77807776aa532178722af7c5109fbf353c315afb91ee0d04f8e3bbabfd03b4 not found: ID does not exist" containerID="8d77807776aa532178722af7c5109fbf353c315afb91ee0d04f8e3bbabfd03b4" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.155943 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d77807776aa532178722af7c5109fbf353c315afb91ee0d04f8e3bbabfd03b4"} err="failed to get container status \"8d77807776aa532178722af7c5109fbf353c315afb91ee0d04f8e3bbabfd03b4\": rpc error: code = NotFound desc = could not find container \"8d77807776aa532178722af7c5109fbf353c315afb91ee0d04f8e3bbabfd03b4\": container with ID starting with 8d77807776aa532178722af7c5109fbf353c315afb91ee0d04f8e3bbabfd03b4 not found: ID does not exist" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.155982 4875 scope.go:117] "RemoveContainer" containerID="a27afb3ca094c5b4b6e24e72b8d0be184622fa17eff5e525c971e7ab09313162" Jan 30 17:00:15 crc kubenswrapper[4875]: E0130 17:00:15.156276 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a27afb3ca094c5b4b6e24e72b8d0be184622fa17eff5e525c971e7ab09313162\": container with ID starting with a27afb3ca094c5b4b6e24e72b8d0be184622fa17eff5e525c971e7ab09313162 not found: ID does not exist" containerID="a27afb3ca094c5b4b6e24e72b8d0be184622fa17eff5e525c971e7ab09313162" Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.156370 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a27afb3ca094c5b4b6e24e72b8d0be184622fa17eff5e525c971e7ab09313162"} err="failed to get container status \"a27afb3ca094c5b4b6e24e72b8d0be184622fa17eff5e525c971e7ab09313162\": rpc error: code = NotFound desc = could not find container \"a27afb3ca094c5b4b6e24e72b8d0be184622fa17eff5e525c971e7ab09313162\": container with ID starting with a27afb3ca094c5b4b6e24e72b8d0be184622fa17eff5e525c971e7ab09313162 not found: ID does not exist" Jan 30 17:00:15 crc kubenswrapper[4875]: 
Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.156463 4875 scope.go:117] "RemoveContainer" containerID="dc68a31a351ff1c2c90c9e1fa1861fbf7af9afeefb86abfb77c1fd1d96523cc0"
Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.168007 4875 scope.go:117] "RemoveContainer" containerID="dc68a31a351ff1c2c90c9e1fa1861fbf7af9afeefb86abfb77c1fd1d96523cc0"
Jan 30 17:00:15 crc kubenswrapper[4875]: E0130 17:00:15.168459 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc68a31a351ff1c2c90c9e1fa1861fbf7af9afeefb86abfb77c1fd1d96523cc0\": container with ID starting with dc68a31a351ff1c2c90c9e1fa1861fbf7af9afeefb86abfb77c1fd1d96523cc0 not found: ID does not exist" containerID="dc68a31a351ff1c2c90c9e1fa1861fbf7af9afeefb86abfb77c1fd1d96523cc0"
Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.168518 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc68a31a351ff1c2c90c9e1fa1861fbf7af9afeefb86abfb77c1fd1d96523cc0"} err="failed to get container status \"dc68a31a351ff1c2c90c9e1fa1861fbf7af9afeefb86abfb77c1fd1d96523cc0\": rpc error: code = NotFound desc = could not find container \"dc68a31a351ff1c2c90c9e1fa1861fbf7af9afeefb86abfb77c1fd1d96523cc0\": container with ID starting with dc68a31a351ff1c2c90c9e1fa1861fbf7af9afeefb86abfb77c1fd1d96523cc0 not found: ID does not exist"
Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.240339 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sd4tv"]
Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.243311 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sd4tv"]
Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.951845 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-j9hxl" event={"ID":"ee16d58a-dd09-48a5-aa90-2788f5bd8fa2","Type":"ContainerStarted","Data":"6d75d5c33f91897db64427271f879c46cb39d9ee257153da279e4e0ea2f51afb"}
Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.952396 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-j9hxl"
Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.955238 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-j9hxl"
Jan 30 17:00:15 crc kubenswrapper[4875]: I0130 17:00:15.969303 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-j9hxl" podStartSLOduration=2.969283818 podStartE2EDuration="2.969283818s" podCreationTimestamp="2026-01-30 17:00:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:00:15.966200913 +0000 UTC m=+226.513564306" watchObservedRunningTime="2026-01-30 17:00:15.969283818 +0000 UTC m=+226.516647201"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.142516 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79" path="/var/lib/kubelet/pods/0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79/volumes"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.143105 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="438bec48-3499-4e88-b9f1-cfb1126424ad" path="/var/lib/kubelet/pods/438bec48-3499-4e88-b9f1-cfb1126424ad/volumes"
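The "Observed pod startup duration" entry is plain timestamp arithmetic: both pull timestamps are the zero time (no image pull was needed), so podStartSLOduration equals the watch-observed running time minus the pod creation timestamp, i.e. 17:00:15.969283818 - 17:00:13 = 2.969283818s. A quick check in Go:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// podCreationTimestamp and watchObservedRunningTime from the log line.
	created, _ := time.Parse(time.RFC3339Nano, "2026-01-30T17:00:13Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-30T17:00:15.969283818Z")
	fmt.Println(running.Sub(created)) // 2.969283818s, matching podStartSLOduration
}
```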
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.143662 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87c78ecd-3fa5-40a9-ac0d-25449555b524" path="/var/lib/kubelet/pods/87c78ecd-3fa5-40a9-ac0d-25449555b524/volumes"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.144192 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="926bc7fe-7fc5-4f59-b161-f32ff75b40b3" path="/var/lib/kubelet/pods/926bc7fe-7fc5-4f59-b161-f32ff75b40b3/volumes"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.145551 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beaaba45-df33-4540-ab78-79f1dc92f87b" path="/var/lib/kubelet/pods/beaaba45-df33-4540-ab78-79f1dc92f87b/volumes"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371272 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-496j4"]
Jan 30 17:00:16 crc kubenswrapper[4875]: E0130 17:00:16.371501 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438bec48-3499-4e88-b9f1-cfb1126424ad" containerName="extract-content"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371516 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="438bec48-3499-4e88-b9f1-cfb1126424ad" containerName="extract-content"
Jan 30 17:00:16 crc kubenswrapper[4875]: E0130 17:00:16.371530 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79" containerName="extract-utilities"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371537 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79" containerName="extract-utilities"
Jan 30 17:00:16 crc kubenswrapper[4875]: E0130 17:00:16.371548 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79" containerName="extract-content"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371556 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79" containerName="extract-content"
Jan 30 17:00:16 crc kubenswrapper[4875]: E0130 17:00:16.371567 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438bec48-3499-4e88-b9f1-cfb1126424ad" containerName="registry-server"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371573 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="438bec48-3499-4e88-b9f1-cfb1126424ad" containerName="registry-server"
Jan 30 17:00:16 crc kubenswrapper[4875]: E0130 17:00:16.371597 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="926bc7fe-7fc5-4f59-b161-f32ff75b40b3" containerName="registry-server"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371606 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="926bc7fe-7fc5-4f59-b161-f32ff75b40b3" containerName="registry-server"
Jan 30 17:00:16 crc kubenswrapper[4875]: E0130 17:00:16.371616 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c78ecd-3fa5-40a9-ac0d-25449555b524" containerName="registry-server"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371623 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c78ecd-3fa5-40a9-ac0d-25449555b524" containerName="registry-server"
Jan 30 17:00:16 crc kubenswrapper[4875]: E0130 17:00:16.371633 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beaaba45-df33-4540-ab78-79f1dc92f87b" containerName="marketplace-operator"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371641 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="beaaba45-df33-4540-ab78-79f1dc92f87b" containerName="marketplace-operator"
Jan 30 17:00:16 crc kubenswrapper[4875]: E0130 17:00:16.371649 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438bec48-3499-4e88-b9f1-cfb1126424ad" containerName="extract-utilities"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371656 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="438bec48-3499-4e88-b9f1-cfb1126424ad" containerName="extract-utilities"
Jan 30 17:00:16 crc kubenswrapper[4875]: E0130 17:00:16.371664 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="926bc7fe-7fc5-4f59-b161-f32ff75b40b3" containerName="extract-content"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371670 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="926bc7fe-7fc5-4f59-b161-f32ff75b40b3" containerName="extract-content"
Jan 30 17:00:16 crc kubenswrapper[4875]: E0130 17:00:16.371679 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79" containerName="registry-server"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371685 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79" containerName="registry-server"
Jan 30 17:00:16 crc kubenswrapper[4875]: E0130 17:00:16.371696 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c78ecd-3fa5-40a9-ac0d-25449555b524" containerName="extract-content"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371703 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c78ecd-3fa5-40a9-ac0d-25449555b524" containerName="extract-content"
Jan 30 17:00:16 crc kubenswrapper[4875]: E0130 17:00:16.371715 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="926bc7fe-7fc5-4f59-b161-f32ff75b40b3" containerName="extract-utilities"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371722 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="926bc7fe-7fc5-4f59-b161-f32ff75b40b3" containerName="extract-utilities"
Jan 30 17:00:16 crc kubenswrapper[4875]: E0130 17:00:16.371733 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c78ecd-3fa5-40a9-ac0d-25449555b524" containerName="extract-utilities"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371741 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c78ecd-3fa5-40a9-ac0d-25449555b524" containerName="extract-utilities"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371859 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c78ebb3-bc24-4b5e-8ea8-02f2a835bb79" containerName="registry-server"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371875 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="926bc7fe-7fc5-4f59-b161-f32ff75b40b3" containerName="registry-server"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371884 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="87c78ecd-3fa5-40a9-ac0d-25449555b524" containerName="registry-server"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371895 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="438bec48-3499-4e88-b9f1-cfb1126424ad" containerName="registry-server"
Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.371906 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="beaaba45-df33-4540-ab78-79f1dc92f87b" containerName="marketplace-operator"
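The cpu_manager/state_mem/memory_manager burst above fires on the next pod admission: RemoveStaleState walks every per-container resource assignment and drops the ones whose pod UID is no longer active (the E-level lines are removal notices, not failures). A minimal sketch of that pruning, with an invented state layout:

```go
package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState deletes any assignment whose pod is no longer active,
// mirroring the cleanup the cpu and memory managers log above.
func removeStaleState(assignments map[key]string, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key]string{
		{"438bec48-3499-4e88-b9f1-cfb1126424ad", "registry-server"}: "cpus: 0-3",
	}
	// Only the newly added pod is active; the deleted catalog pod is not.
	removeStaleState(assignments, map[string]bool{"99ac87cd-0125-4818-9369-713bcd27baa1": true})
	fmt.Println("remaining assignments:", len(assignments)) // 0
}
```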
containerName="marketplace-operator" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.372715 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-496j4" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.374694 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.383370 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-496j4"] Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.565363 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ac87cd-0125-4818-9369-713bcd27baa1-utilities\") pod \"redhat-marketplace-496j4\" (UID: \"99ac87cd-0125-4818-9369-713bcd27baa1\") " pod="openshift-marketplace/redhat-marketplace-496j4" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.565445 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9zpk\" (UniqueName: \"kubernetes.io/projected/99ac87cd-0125-4818-9369-713bcd27baa1-kube-api-access-h9zpk\") pod \"redhat-marketplace-496j4\" (UID: \"99ac87cd-0125-4818-9369-713bcd27baa1\") " pod="openshift-marketplace/redhat-marketplace-496j4" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.565481 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ac87cd-0125-4818-9369-713bcd27baa1-catalog-content\") pod \"redhat-marketplace-496j4\" (UID: \"99ac87cd-0125-4818-9369-713bcd27baa1\") " pod="openshift-marketplace/redhat-marketplace-496j4" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.569316 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9gm2r"] Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.570320 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9gm2r" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.572069 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.584499 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9gm2r"] Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.666823 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ac87cd-0125-4818-9369-713bcd27baa1-utilities\") pod \"redhat-marketplace-496j4\" (UID: \"99ac87cd-0125-4818-9369-713bcd27baa1\") " pod="openshift-marketplace/redhat-marketplace-496j4" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.667114 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9zpk\" (UniqueName: \"kubernetes.io/projected/99ac87cd-0125-4818-9369-713bcd27baa1-kube-api-access-h9zpk\") pod \"redhat-marketplace-496j4\" (UID: \"99ac87cd-0125-4818-9369-713bcd27baa1\") " pod="openshift-marketplace/redhat-marketplace-496j4" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.667227 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ac87cd-0125-4818-9369-713bcd27baa1-catalog-content\") pod \"redhat-marketplace-496j4\" (UID: \"99ac87cd-0125-4818-9369-713bcd27baa1\") " pod="openshift-marketplace/redhat-marketplace-496j4" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.667340 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ac87cd-0125-4818-9369-713bcd27baa1-utilities\") pod \"redhat-marketplace-496j4\" (UID: \"99ac87cd-0125-4818-9369-713bcd27baa1\") " pod="openshift-marketplace/redhat-marketplace-496j4" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.667814 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ac87cd-0125-4818-9369-713bcd27baa1-catalog-content\") pod \"redhat-marketplace-496j4\" (UID: \"99ac87cd-0125-4818-9369-713bcd27baa1\") " pod="openshift-marketplace/redhat-marketplace-496j4" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.686515 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9zpk\" (UniqueName: \"kubernetes.io/projected/99ac87cd-0125-4818-9369-713bcd27baa1-kube-api-access-h9zpk\") pod \"redhat-marketplace-496j4\" (UID: \"99ac87cd-0125-4818-9369-713bcd27baa1\") " pod="openshift-marketplace/redhat-marketplace-496j4" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.694280 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-496j4" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.768741 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvppr\" (UniqueName: \"kubernetes.io/projected/19625989-de41-4994-b07f-6d0880ba073c-kube-api-access-wvppr\") pod \"certified-operators-9gm2r\" (UID: \"19625989-de41-4994-b07f-6d0880ba073c\") " pod="openshift-marketplace/certified-operators-9gm2r" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.768791 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19625989-de41-4994-b07f-6d0880ba073c-utilities\") pod \"certified-operators-9gm2r\" (UID: \"19625989-de41-4994-b07f-6d0880ba073c\") " pod="openshift-marketplace/certified-operators-9gm2r" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.768828 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19625989-de41-4994-b07f-6d0880ba073c-catalog-content\") pod \"certified-operators-9gm2r\" (UID: \"19625989-de41-4994-b07f-6d0880ba073c\") " pod="openshift-marketplace/certified-operators-9gm2r" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.869868 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvppr\" (UniqueName: \"kubernetes.io/projected/19625989-de41-4994-b07f-6d0880ba073c-kube-api-access-wvppr\") pod \"certified-operators-9gm2r\" (UID: \"19625989-de41-4994-b07f-6d0880ba073c\") " pod="openshift-marketplace/certified-operators-9gm2r" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.870249 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19625989-de41-4994-b07f-6d0880ba073c-utilities\") pod \"certified-operators-9gm2r\" (UID: \"19625989-de41-4994-b07f-6d0880ba073c\") " pod="openshift-marketplace/certified-operators-9gm2r" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.870279 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19625989-de41-4994-b07f-6d0880ba073c-catalog-content\") pod \"certified-operators-9gm2r\" (UID: \"19625989-de41-4994-b07f-6d0880ba073c\") " pod="openshift-marketplace/certified-operators-9gm2r" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.870811 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19625989-de41-4994-b07f-6d0880ba073c-catalog-content\") pod \"certified-operators-9gm2r\" (UID: \"19625989-de41-4994-b07f-6d0880ba073c\") " pod="openshift-marketplace/certified-operators-9gm2r" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.870850 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19625989-de41-4994-b07f-6d0880ba073c-utilities\") pod \"certified-operators-9gm2r\" (UID: \"19625989-de41-4994-b07f-6d0880ba073c\") " pod="openshift-marketplace/certified-operators-9gm2r" Jan 30 17:00:16 crc kubenswrapper[4875]: I0130 17:00:16.898838 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvppr\" (UniqueName: \"kubernetes.io/projected/19625989-de41-4994-b07f-6d0880ba073c-kube-api-access-wvppr\") pod 
\"certified-operators-9gm2r\" (UID: \"19625989-de41-4994-b07f-6d0880ba073c\") " pod="openshift-marketplace/certified-operators-9gm2r" Jan 30 17:00:17 crc kubenswrapper[4875]: I0130 17:00:17.089651 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-496j4"] Jan 30 17:00:17 crc kubenswrapper[4875]: W0130 17:00:17.095924 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99ac87cd_0125_4818_9369_713bcd27baa1.slice/crio-8bd95ec365bdad1cb05894cc8c85b0fcafc44379f8d58fb529e34d5591402408 WatchSource:0}: Error finding container 8bd95ec365bdad1cb05894cc8c85b0fcafc44379f8d58fb529e34d5591402408: Status 404 returned error can't find the container with id 8bd95ec365bdad1cb05894cc8c85b0fcafc44379f8d58fb529e34d5591402408 Jan 30 17:00:17 crc kubenswrapper[4875]: I0130 17:00:17.195801 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9gm2r" Jan 30 17:00:17 crc kubenswrapper[4875]: I0130 17:00:17.564557 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9gm2r"] Jan 30 17:00:17 crc kubenswrapper[4875]: W0130 17:00:17.569865 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19625989_de41_4994_b07f_6d0880ba073c.slice/crio-ff9a8cf15ff143200ad5e44255e7e814bfa51242a939dade36ad1865b2af2057 WatchSource:0}: Error finding container ff9a8cf15ff143200ad5e44255e7e814bfa51242a939dade36ad1865b2af2057: Status 404 returned error can't find the container with id ff9a8cf15ff143200ad5e44255e7e814bfa51242a939dade36ad1865b2af2057 Jan 30 17:00:17 crc kubenswrapper[4875]: I0130 17:00:17.966462 4875 generic.go:334] "Generic (PLEG): container finished" podID="19625989-de41-4994-b07f-6d0880ba073c" containerID="78f427ed407f8acbdfb71f0beab73d867595b2891be18d41f3db899209b23ab5" exitCode=0 Jan 30 17:00:17 crc kubenswrapper[4875]: I0130 17:00:17.966523 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gm2r" event={"ID":"19625989-de41-4994-b07f-6d0880ba073c","Type":"ContainerDied","Data":"78f427ed407f8acbdfb71f0beab73d867595b2891be18d41f3db899209b23ab5"} Jan 30 17:00:17 crc kubenswrapper[4875]: I0130 17:00:17.966846 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gm2r" event={"ID":"19625989-de41-4994-b07f-6d0880ba073c","Type":"ContainerStarted","Data":"ff9a8cf15ff143200ad5e44255e7e814bfa51242a939dade36ad1865b2af2057"} Jan 30 17:00:17 crc kubenswrapper[4875]: I0130 17:00:17.972163 4875 generic.go:334] "Generic (PLEG): container finished" podID="99ac87cd-0125-4818-9369-713bcd27baa1" containerID="e920816a6ad602f35f09ea79a6a2c9dc648e601cf64175c6efeea370b0603724" exitCode=0 Jan 30 17:00:17 crc kubenswrapper[4875]: I0130 17:00:17.972434 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-496j4" event={"ID":"99ac87cd-0125-4818-9369-713bcd27baa1","Type":"ContainerDied","Data":"e920816a6ad602f35f09ea79a6a2c9dc648e601cf64175c6efeea370b0603724"} Jan 30 17:00:17 crc kubenswrapper[4875]: I0130 17:00:17.972472 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-496j4" event={"ID":"99ac87cd-0125-4818-9369-713bcd27baa1","Type":"ContainerStarted","Data":"8bd95ec365bdad1cb05894cc8c85b0fcafc44379f8d58fb529e34d5591402408"} Jan 
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.774974 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bfpqk"]
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.776097 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bfpqk"
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.777636 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.789641 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bfpqk"]
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.893325 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc32276d-2194-4ac4-9a86-da06d803d46d-catalog-content\") pod \"community-operators-bfpqk\" (UID: \"dc32276d-2194-4ac4-9a86-da06d803d46d\") " pod="openshift-marketplace/community-operators-bfpqk"
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.893401 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc32276d-2194-4ac4-9a86-da06d803d46d-utilities\") pod \"community-operators-bfpqk\" (UID: \"dc32276d-2194-4ac4-9a86-da06d803d46d\") " pod="openshift-marketplace/community-operators-bfpqk"
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.893443 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrtz\" (UniqueName: \"kubernetes.io/projected/dc32276d-2194-4ac4-9a86-da06d803d46d-kube-api-access-jqrtz\") pod \"community-operators-bfpqk\" (UID: \"dc32276d-2194-4ac4-9a86-da06d803d46d\") " pod="openshift-marketplace/community-operators-bfpqk"
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.917810 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7597544687-scv9p"]
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.918018 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7597544687-scv9p" podUID="e5eebfb0-ab5b-40dc-9927-38e711b5eddf" containerName="controller-manager" containerID="cri-o://fc856d850074943e5fc424998ef95a978fce3daa43ec1141c314bb18a5446731" gracePeriod=30
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.976284 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gct2f"]
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.977416 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gct2f"
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.978844 4875 generic.go:334] "Generic (PLEG): container finished" podID="19625989-de41-4994-b07f-6d0880ba073c" containerID="775524c2e3772f7304e280518e3da374e4b0466cb54ea08bd64d74338a19277e" exitCode=0
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.978882 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gm2r" event={"ID":"19625989-de41-4994-b07f-6d0880ba073c","Type":"ContainerDied","Data":"775524c2e3772f7304e280518e3da374e4b0466cb54ea08bd64d74338a19277e"}
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.979468 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.988932 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gct2f"]
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.995120 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqjqx\" (UniqueName: \"kubernetes.io/projected/6596cd04-1bed-410b-8304-70d475ba79ee-kube-api-access-gqjqx\") pod \"redhat-operators-gct2f\" (UID: \"6596cd04-1bed-410b-8304-70d475ba79ee\") " pod="openshift-marketplace/redhat-operators-gct2f"
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.995157 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc32276d-2194-4ac4-9a86-da06d803d46d-catalog-content\") pod \"community-operators-bfpqk\" (UID: \"dc32276d-2194-4ac4-9a86-da06d803d46d\") " pod="openshift-marketplace/community-operators-bfpqk"
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.995177 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6596cd04-1bed-410b-8304-70d475ba79ee-catalog-content\") pod \"redhat-operators-gct2f\" (UID: \"6596cd04-1bed-410b-8304-70d475ba79ee\") " pod="openshift-marketplace/redhat-operators-gct2f"
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.995272 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc32276d-2194-4ac4-9a86-da06d803d46d-utilities\") pod \"community-operators-bfpqk\" (UID: \"dc32276d-2194-4ac4-9a86-da06d803d46d\") " pod="openshift-marketplace/community-operators-bfpqk"
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.995308 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6596cd04-1bed-410b-8304-70d475ba79ee-utilities\") pod \"redhat-operators-gct2f\" (UID: \"6596cd04-1bed-410b-8304-70d475ba79ee\") " pod="openshift-marketplace/redhat-operators-gct2f"
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.995331 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqrtz\" (UniqueName: \"kubernetes.io/projected/dc32276d-2194-4ac4-9a86-da06d803d46d-kube-api-access-jqrtz\") pod \"community-operators-bfpqk\" (UID: \"dc32276d-2194-4ac4-9a86-da06d803d46d\") " pod="openshift-marketplace/community-operators-bfpqk"
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.995611 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc32276d-2194-4ac4-9a86-da06d803d46d-catalog-content\") pod \"community-operators-bfpqk\" (UID: \"dc32276d-2194-4ac4-9a86-da06d803d46d\") " pod="openshift-marketplace/community-operators-bfpqk"
Jan 30 17:00:18 crc kubenswrapper[4875]: I0130 17:00:18.995804 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc32276d-2194-4ac4-9a86-da06d803d46d-utilities\") pod \"community-operators-bfpqk\" (UID: \"dc32276d-2194-4ac4-9a86-da06d803d46d\") " pod="openshift-marketplace/community-operators-bfpqk"
Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.015308 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqrtz\" (UniqueName: \"kubernetes.io/projected/dc32276d-2194-4ac4-9a86-da06d803d46d-kube-api-access-jqrtz\") pod \"community-operators-bfpqk\" (UID: \"dc32276d-2194-4ac4-9a86-da06d803d46d\") " pod="openshift-marketplace/community-operators-bfpqk"
Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.030885 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4"]
Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.031121 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" podUID="5f74a1a2-7858-49c3-bb89-2c209bfefb32" containerName="route-controller-manager" containerID="cri-o://8f95fa8c387a04038bafb5baf3859d7c88e4d7f4a3a57caa7268e7d70164d143" gracePeriod=30
Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.096396 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6596cd04-1bed-410b-8304-70d475ba79ee-utilities\") pod \"redhat-operators-gct2f\" (UID: \"6596cd04-1bed-410b-8304-70d475ba79ee\") " pod="openshift-marketplace/redhat-operators-gct2f"
Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.096471 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqjqx\" (UniqueName: \"kubernetes.io/projected/6596cd04-1bed-410b-8304-70d475ba79ee-kube-api-access-gqjqx\") pod \"redhat-operators-gct2f\" (UID: \"6596cd04-1bed-410b-8304-70d475ba79ee\") " pod="openshift-marketplace/redhat-operators-gct2f"
Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.096502 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6596cd04-1bed-410b-8304-70d475ba79ee-catalog-content\") pod \"redhat-operators-gct2f\" (UID: \"6596cd04-1bed-410b-8304-70d475ba79ee\") " pod="openshift-marketplace/redhat-operators-gct2f"
Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.097233 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6596cd04-1bed-410b-8304-70d475ba79ee-utilities\") pod \"redhat-operators-gct2f\" (UID: \"6596cd04-1bed-410b-8304-70d475ba79ee\") " pod="openshift-marketplace/redhat-operators-gct2f"
Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.097345 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6596cd04-1bed-410b-8304-70d475ba79ee-catalog-content\") pod \"redhat-operators-gct2f\" (UID: \"6596cd04-1bed-410b-8304-70d475ba79ee\") " pod="openshift-marketplace/redhat-operators-gct2f"
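The two "Killing container with a grace period" entries record the SIGTERM-then-SIGKILL contract: the runtime is asked to stop the container and is allowed gracePeriod=30 seconds before escalating. A minimal sketch against an invented runtime interface (the real call is CRI's StopContainer with a timeout):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

type containerRuntime interface {
	StopContainer(ctx context.Context, id string, timeout time.Duration) error
}

type fakeRuntime struct{}

func (fakeRuntime) StopContainer(ctx context.Context, id string, timeout time.Duration) error {
	fmt.Printf("stopping %s (SIGTERM now, SIGKILL after %s)\n", id, timeout)
	return nil
}

// killContainer mirrors the shape of the log line: one container ID plus a
// grace period that bounds how long the runtime waits before force-killing.
func killContainer(r containerRuntime, id string, gracePeriodSeconds int64) error {
	grace := time.Duration(gracePeriodSeconds) * time.Second
	ctx, cancel := context.WithTimeout(context.Background(), grace)
	defer cancel()
	return r.StopContainer(ctx, id, grace)
}

func main() {
	_ = killContainer(fakeRuntime{}, "cri-o://fc856d850074", 30)
}
```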
pod="openshift-marketplace/redhat-operators-gct2f" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.118200 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqjqx\" (UniqueName: \"kubernetes.io/projected/6596cd04-1bed-410b-8304-70d475ba79ee-kube-api-access-gqjqx\") pod \"redhat-operators-gct2f\" (UID: \"6596cd04-1bed-410b-8304-70d475ba79ee\") " pod="openshift-marketplace/redhat-operators-gct2f" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.130017 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bfpqk" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.360697 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gct2f" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.478723 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.510968 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.606364 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f74a1a2-7858-49c3-bb89-2c209bfefb32-client-ca\") pod \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\" (UID: \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\") " Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.606425 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f74a1a2-7858-49c3-bb89-2c209bfefb32-serving-cert\") pod \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\" (UID: \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\") " Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.606450 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f74a1a2-7858-49c3-bb89-2c209bfefb32-config\") pod \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\" (UID: \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\") " Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.606485 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb8zm\" (UniqueName: \"kubernetes.io/projected/5f74a1a2-7858-49c3-bb89-2c209bfefb32-kube-api-access-tb8zm\") pod \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\" (UID: \"5f74a1a2-7858-49c3-bb89-2c209bfefb32\") " Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.606685 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-config\") pod \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.607055 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f74a1a2-7858-49c3-bb89-2c209bfefb32-client-ca" (OuterVolumeSpecName: "client-ca") pod "5f74a1a2-7858-49c3-bb89-2c209bfefb32" (UID: "5f74a1a2-7858-49c3-bb89-2c209bfefb32"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.607524 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-config" (OuterVolumeSpecName: "config") pod "e5eebfb0-ab5b-40dc-9927-38e711b5eddf" (UID: "e5eebfb0-ab5b-40dc-9927-38e711b5eddf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.607904 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f74a1a2-7858-49c3-bb89-2c209bfefb32-config" (OuterVolumeSpecName: "config") pod "5f74a1a2-7858-49c3-bb89-2c209bfefb32" (UID: "5f74a1a2-7858-49c3-bb89-2c209bfefb32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.611783 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f74a1a2-7858-49c3-bb89-2c209bfefb32-kube-api-access-tb8zm" (OuterVolumeSpecName: "kube-api-access-tb8zm") pod "5f74a1a2-7858-49c3-bb89-2c209bfefb32" (UID: "5f74a1a2-7858-49c3-bb89-2c209bfefb32"). InnerVolumeSpecName "kube-api-access-tb8zm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.612136 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f74a1a2-7858-49c3-bb89-2c209bfefb32-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5f74a1a2-7858-49c3-bb89-2c209bfefb32" (UID: "5f74a1a2-7858-49c3-bb89-2c209bfefb32"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.633804 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bfpqk"] Jan 30 17:00:19 crc kubenswrapper[4875]: W0130 17:00:19.639794 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc32276d_2194_4ac4_9a86_da06d803d46d.slice/crio-56dad41d1d83e4598f32b5491851cfa74e716521d72994702337fdff8d8670a5 WatchSource:0}: Error finding container 56dad41d1d83e4598f32b5491851cfa74e716521d72994702337fdff8d8670a5: Status 404 returned error can't find the container with id 56dad41d1d83e4598f32b5491851cfa74e716521d72994702337fdff8d8670a5 Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.707360 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-client-ca\") pod \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.707406 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5np2p\" (UniqueName: \"kubernetes.io/projected/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-kube-api-access-5np2p\") pod \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.707438 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-serving-cert\") pod \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") 
" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.707480 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-proxy-ca-bundles\") pod \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\" (UID: \"e5eebfb0-ab5b-40dc-9927-38e711b5eddf\") " Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.707668 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.707681 4875 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f74a1a2-7858-49c3-bb89-2c209bfefb32-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.707690 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f74a1a2-7858-49c3-bb89-2c209bfefb32-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.707699 4875 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f74a1a2-7858-49c3-bb89-2c209bfefb32-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.707708 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb8zm\" (UniqueName: \"kubernetes.io/projected/5f74a1a2-7858-49c3-bb89-2c209bfefb32-kube-api-access-tb8zm\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.707777 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-client-ca" (OuterVolumeSpecName: "client-ca") pod "e5eebfb0-ab5b-40dc-9927-38e711b5eddf" (UID: "e5eebfb0-ab5b-40dc-9927-38e711b5eddf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.708260 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e5eebfb0-ab5b-40dc-9927-38e711b5eddf" (UID: "e5eebfb0-ab5b-40dc-9927-38e711b5eddf"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.710734 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-kube-api-access-5np2p" (OuterVolumeSpecName: "kube-api-access-5np2p") pod "e5eebfb0-ab5b-40dc-9927-38e711b5eddf" (UID: "e5eebfb0-ab5b-40dc-9927-38e711b5eddf"). InnerVolumeSpecName "kube-api-access-5np2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.711149 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e5eebfb0-ab5b-40dc-9927-38e711b5eddf" (UID: "e5eebfb0-ab5b-40dc-9927-38e711b5eddf"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.781863 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gct2f"] Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.808459 4875 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.808492 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5np2p\" (UniqueName: \"kubernetes.io/projected/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-kube-api-access-5np2p\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.808509 4875 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.808521 4875 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5eebfb0-ab5b-40dc-9927-38e711b5eddf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:19 crc kubenswrapper[4875]: W0130 17:00:19.815050 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6596cd04_1bed_410b_8304_70d475ba79ee.slice/crio-58d801a39eb88a692e175182a0edf868f506f7e622fb94d686b1677db33a67a9 WatchSource:0}: Error finding container 58d801a39eb88a692e175182a0edf868f506f7e622fb94d686b1677db33a67a9: Status 404 returned error can't find the container with id 58d801a39eb88a692e175182a0edf868f506f7e622fb94d686b1677db33a67a9 Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.986139 4875 generic.go:334] "Generic (PLEG): container finished" podID="5f74a1a2-7858-49c3-bb89-2c209bfefb32" containerID="8f95fa8c387a04038bafb5baf3859d7c88e4d7f4a3a57caa7268e7d70164d143" exitCode=0 Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.986204 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.986191 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" event={"ID":"5f74a1a2-7858-49c3-bb89-2c209bfefb32","Type":"ContainerDied","Data":"8f95fa8c387a04038bafb5baf3859d7c88e4d7f4a3a57caa7268e7d70164d143"} Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.986528 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4" event={"ID":"5f74a1a2-7858-49c3-bb89-2c209bfefb32","Type":"ContainerDied","Data":"fd62dc24b0f626b8897e9a6e1d10b9c8d1894d678517f72a0369cb1c09538866"} Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.986552 4875 scope.go:117] "RemoveContainer" containerID="8f95fa8c387a04038bafb5baf3859d7c88e4d7f4a3a57caa7268e7d70164d143" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.988124 4875 generic.go:334] "Generic (PLEG): container finished" podID="e5eebfb0-ab5b-40dc-9927-38e711b5eddf" containerID="fc856d850074943e5fc424998ef95a978fce3daa43ec1141c314bb18a5446731" exitCode=0 Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.988159 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7597544687-scv9p" event={"ID":"e5eebfb0-ab5b-40dc-9927-38e711b5eddf","Type":"ContainerDied","Data":"fc856d850074943e5fc424998ef95a978fce3daa43ec1141c314bb18a5446731"} Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.988183 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7597544687-scv9p" event={"ID":"e5eebfb0-ab5b-40dc-9927-38e711b5eddf","Type":"ContainerDied","Data":"a142d3b79d4996e7d6806548a2c438a49f231f812ba795800800e2ebb5a3b2a2"} Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.988219 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7597544687-scv9p" Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.990369 4875 generic.go:334] "Generic (PLEG): container finished" podID="6596cd04-1bed-410b-8304-70d475ba79ee" containerID="18fa2ef1a0bccdfcfaf83412b7e33048963788e1bd3bb2e00684679894e8f6da" exitCode=0 Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.990402 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gct2f" event={"ID":"6596cd04-1bed-410b-8304-70d475ba79ee","Type":"ContainerDied","Data":"18fa2ef1a0bccdfcfaf83412b7e33048963788e1bd3bb2e00684679894e8f6da"} Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.990432 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gct2f" event={"ID":"6596cd04-1bed-410b-8304-70d475ba79ee","Type":"ContainerStarted","Data":"58d801a39eb88a692e175182a0edf868f506f7e622fb94d686b1677db33a67a9"} Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.991929 4875 generic.go:334] "Generic (PLEG): container finished" podID="dc32276d-2194-4ac4-9a86-da06d803d46d" containerID="a7cea36d6792520e81cf52f9980e0ecec6a5718943217093d54055ad17105859" exitCode=0 Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.991957 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfpqk" event={"ID":"dc32276d-2194-4ac4-9a86-da06d803d46d","Type":"ContainerDied","Data":"a7cea36d6792520e81cf52f9980e0ecec6a5718943217093d54055ad17105859"} Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.991992 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfpqk" event={"ID":"dc32276d-2194-4ac4-9a86-da06d803d46d","Type":"ContainerStarted","Data":"56dad41d1d83e4598f32b5491851cfa74e716521d72994702337fdff8d8670a5"} Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.994140 4875 generic.go:334] "Generic (PLEG): container finished" podID="99ac87cd-0125-4818-9369-713bcd27baa1" containerID="0d6d6c163fd20bdf6acfb99ae4573af9a97d5b9206bcc9c1a1941eebea11c91a" exitCode=0 Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.994419 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-496j4" event={"ID":"99ac87cd-0125-4818-9369-713bcd27baa1","Type":"ContainerDied","Data":"0d6d6c163fd20bdf6acfb99ae4573af9a97d5b9206bcc9c1a1941eebea11c91a"} Jan 30 17:00:19 crc kubenswrapper[4875]: I0130 17:00:19.999356 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gm2r" event={"ID":"19625989-de41-4994-b07f-6d0880ba073c","Type":"ContainerStarted","Data":"c9c675481037d4c6108a084ba81e8feef9beca19ef426dee1ff8b8a74aa8b7d1"} Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.014383 4875 scope.go:117] "RemoveContainer" containerID="8f95fa8c387a04038bafb5baf3859d7c88e4d7f4a3a57caa7268e7d70164d143" Jan 30 17:00:20 crc kubenswrapper[4875]: E0130 17:00:20.014933 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f95fa8c387a04038bafb5baf3859d7c88e4d7f4a3a57caa7268e7d70164d143\": container with ID starting with 8f95fa8c387a04038bafb5baf3859d7c88e4d7f4a3a57caa7268e7d70164d143 not found: ID does not exist" containerID="8f95fa8c387a04038bafb5baf3859d7c88e4d7f4a3a57caa7268e7d70164d143" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.014993 4875 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"8f95fa8c387a04038bafb5baf3859d7c88e4d7f4a3a57caa7268e7d70164d143"} err="failed to get container status \"8f95fa8c387a04038bafb5baf3859d7c88e4d7f4a3a57caa7268e7d70164d143\": rpc error: code = NotFound desc = could not find container \"8f95fa8c387a04038bafb5baf3859d7c88e4d7f4a3a57caa7268e7d70164d143\": container with ID starting with 8f95fa8c387a04038bafb5baf3859d7c88e4d7f4a3a57caa7268e7d70164d143 not found: ID does not exist" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.015066 4875 scope.go:117] "RemoveContainer" containerID="fc856d850074943e5fc424998ef95a978fce3daa43ec1141c314bb18a5446731" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.033683 4875 scope.go:117] "RemoveContainer" containerID="fc856d850074943e5fc424998ef95a978fce3daa43ec1141c314bb18a5446731" Jan 30 17:00:20 crc kubenswrapper[4875]: E0130 17:00:20.034511 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc856d850074943e5fc424998ef95a978fce3daa43ec1141c314bb18a5446731\": container with ID starting with fc856d850074943e5fc424998ef95a978fce3daa43ec1141c314bb18a5446731 not found: ID does not exist" containerID="fc856d850074943e5fc424998ef95a978fce3daa43ec1141c314bb18a5446731" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.034554 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc856d850074943e5fc424998ef95a978fce3daa43ec1141c314bb18a5446731"} err="failed to get container status \"fc856d850074943e5fc424998ef95a978fce3daa43ec1141c314bb18a5446731\": rpc error: code = NotFound desc = could not find container \"fc856d850074943e5fc424998ef95a978fce3daa43ec1141c314bb18a5446731\": container with ID starting with fc856d850074943e5fc424998ef95a978fce3daa43ec1141c314bb18a5446731 not found: ID does not exist" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.049444 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9gm2r" podStartSLOduration=2.577523186 podStartE2EDuration="4.049426259s" podCreationTimestamp="2026-01-30 17:00:16 +0000 UTC" firstStartedPulling="2026-01-30 17:00:17.967897884 +0000 UTC m=+228.515261267" lastFinishedPulling="2026-01-30 17:00:19.439800957 +0000 UTC m=+229.987164340" observedRunningTime="2026-01-30 17:00:20.046111416 +0000 UTC m=+230.593474799" watchObservedRunningTime="2026-01-30 17:00:20.049426259 +0000 UTC m=+230.596789642" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.074957 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7597544687-scv9p"] Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.089555 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7597544687-scv9p"] Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.093450 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4"] Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.097450 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64586c844b-jvmq4"] Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.143540 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f74a1a2-7858-49c3-bb89-2c209bfefb32" 
path="/var/lib/kubelet/pods/5f74a1a2-7858-49c3-bb89-2c209bfefb32/volumes" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.144542 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5eebfb0-ab5b-40dc-9927-38e711b5eddf" path="/var/lib/kubelet/pods/e5eebfb0-ab5b-40dc-9927-38e711b5eddf/volumes" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.433810 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt"] Jan 30 17:00:20 crc kubenswrapper[4875]: E0130 17:00:20.434066 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f74a1a2-7858-49c3-bb89-2c209bfefb32" containerName="route-controller-manager" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.434082 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f74a1a2-7858-49c3-bb89-2c209bfefb32" containerName="route-controller-manager" Jan 30 17:00:20 crc kubenswrapper[4875]: E0130 17:00:20.434105 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5eebfb0-ab5b-40dc-9927-38e711b5eddf" containerName="controller-manager" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.434114 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5eebfb0-ab5b-40dc-9927-38e711b5eddf" containerName="controller-manager" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.434227 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5eebfb0-ab5b-40dc-9927-38e711b5eddf" containerName="controller-manager" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.434246 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f74a1a2-7858-49c3-bb89-2c209bfefb32" containerName="route-controller-manager" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.434665 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.437400 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.437599 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.437941 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.438008 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq"] Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.438080 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.438156 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.438268 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.438622 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.443959 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.444163 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.444240 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.448306 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.450114 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.451416 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.453813 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq"] Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.454745 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.458670 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt"] Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.516566 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24fe7b6e-d390-45b4-9eb3-deb616ec1729-serving-cert\") pod \"controller-manager-6fd8fbbc8c-g8qdq\" (UID: \"24fe7b6e-d390-45b4-9eb3-deb616ec1729\") " pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.516665 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24fe7b6e-d390-45b4-9eb3-deb616ec1729-client-ca\") pod \"controller-manager-6fd8fbbc8c-g8qdq\" (UID: \"24fe7b6e-d390-45b4-9eb3-deb616ec1729\") " pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.516729 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjfwd\" (UniqueName: \"kubernetes.io/projected/24fe7b6e-d390-45b4-9eb3-deb616ec1729-kube-api-access-bjfwd\") pod \"controller-manager-6fd8fbbc8c-g8qdq\" (UID: \"24fe7b6e-d390-45b4-9eb3-deb616ec1729\") " pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.516779 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24fe7b6e-d390-45b4-9eb3-deb616ec1729-proxy-ca-bundles\") pod \"controller-manager-6fd8fbbc8c-g8qdq\" (UID: \"24fe7b6e-d390-45b4-9eb3-deb616ec1729\") " pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 
30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.516798 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr86q\" (UniqueName: \"kubernetes.io/projected/57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d-kube-api-access-xr86q\") pod \"route-controller-manager-5f5f686d7d-vdlkt\" (UID: \"57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d\") " pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.516816 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d-client-ca\") pod \"route-controller-manager-5f5f686d7d-vdlkt\" (UID: \"57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d\") " pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.516865 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d-config\") pod \"route-controller-manager-5f5f686d7d-vdlkt\" (UID: \"57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d\") " pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.516884 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24fe7b6e-d390-45b4-9eb3-deb616ec1729-config\") pod \"controller-manager-6fd8fbbc8c-g8qdq\" (UID: \"24fe7b6e-d390-45b4-9eb3-deb616ec1729\") " pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.516899 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d-serving-cert\") pod \"route-controller-manager-5f5f686d7d-vdlkt\" (UID: \"57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d\") " pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.617912 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24fe7b6e-d390-45b4-9eb3-deb616ec1729-serving-cert\") pod \"controller-manager-6fd8fbbc8c-g8qdq\" (UID: \"24fe7b6e-d390-45b4-9eb3-deb616ec1729\") " pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.617968 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24fe7b6e-d390-45b4-9eb3-deb616ec1729-client-ca\") pod \"controller-manager-6fd8fbbc8c-g8qdq\" (UID: \"24fe7b6e-d390-45b4-9eb3-deb616ec1729\") " pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.617989 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjfwd\" (UniqueName: \"kubernetes.io/projected/24fe7b6e-d390-45b4-9eb3-deb616ec1729-kube-api-access-bjfwd\") pod \"controller-manager-6fd8fbbc8c-g8qdq\" (UID: \"24fe7b6e-d390-45b4-9eb3-deb616ec1729\") " pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 
17:00:20.618017 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr86q\" (UniqueName: \"kubernetes.io/projected/57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d-kube-api-access-xr86q\") pod \"route-controller-manager-5f5f686d7d-vdlkt\" (UID: \"57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d\") " pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.618034 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24fe7b6e-d390-45b4-9eb3-deb616ec1729-proxy-ca-bundles\") pod \"controller-manager-6fd8fbbc8c-g8qdq\" (UID: \"24fe7b6e-d390-45b4-9eb3-deb616ec1729\") " pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.618050 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d-client-ca\") pod \"route-controller-manager-5f5f686d7d-vdlkt\" (UID: \"57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d\") " pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.618075 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d-config\") pod \"route-controller-manager-5f5f686d7d-vdlkt\" (UID: \"57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d\") " pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.618094 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24fe7b6e-d390-45b4-9eb3-deb616ec1729-config\") pod \"controller-manager-6fd8fbbc8c-g8qdq\" (UID: \"24fe7b6e-d390-45b4-9eb3-deb616ec1729\") " pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.618111 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d-serving-cert\") pod \"route-controller-manager-5f5f686d7d-vdlkt\" (UID: \"57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d\") " pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.620713 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d-client-ca\") pod \"route-controller-manager-5f5f686d7d-vdlkt\" (UID: \"57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d\") " pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.620806 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24fe7b6e-d390-45b4-9eb3-deb616ec1729-client-ca\") pod \"controller-manager-6fd8fbbc8c-g8qdq\" (UID: \"24fe7b6e-d390-45b4-9eb3-deb616ec1729\") " pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.621036 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d-config\") pod \"route-controller-manager-5f5f686d7d-vdlkt\" (UID: \"57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d\") " pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.621064 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24fe7b6e-d390-45b4-9eb3-deb616ec1729-proxy-ca-bundles\") pod \"controller-manager-6fd8fbbc8c-g8qdq\" (UID: \"24fe7b6e-d390-45b4-9eb3-deb616ec1729\") " pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.622326 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24fe7b6e-d390-45b4-9eb3-deb616ec1729-config\") pod \"controller-manager-6fd8fbbc8c-g8qdq\" (UID: \"24fe7b6e-d390-45b4-9eb3-deb616ec1729\") " pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.631606 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d-serving-cert\") pod \"route-controller-manager-5f5f686d7d-vdlkt\" (UID: \"57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d\") " pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.631923 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24fe7b6e-d390-45b4-9eb3-deb616ec1729-serving-cert\") pod \"controller-manager-6fd8fbbc8c-g8qdq\" (UID: \"24fe7b6e-d390-45b4-9eb3-deb616ec1729\") " pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.643169 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjfwd\" (UniqueName: \"kubernetes.io/projected/24fe7b6e-d390-45b4-9eb3-deb616ec1729-kube-api-access-bjfwd\") pod \"controller-manager-6fd8fbbc8c-g8qdq\" (UID: \"24fe7b6e-d390-45b4-9eb3-deb616ec1729\") " pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.643792 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr86q\" (UniqueName: \"kubernetes.io/projected/57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d-kube-api-access-xr86q\") pod \"route-controller-manager-5f5f686d7d-vdlkt\" (UID: \"57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d\") " pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.751941 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" Jan 30 17:00:20 crc kubenswrapper[4875]: I0130 17:00:20.778873 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:21 crc kubenswrapper[4875]: I0130 17:00:21.022155 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq"] Jan 30 17:00:21 crc kubenswrapper[4875]: I0130 17:00:21.027164 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-496j4" event={"ID":"99ac87cd-0125-4818-9369-713bcd27baa1","Type":"ContainerStarted","Data":"3c539efc722fa682b1a3cbd34de3da7039bacd006f4f17c0fe02f0c5fd9f043b"} Jan 30 17:00:21 crc kubenswrapper[4875]: W0130 17:00:21.042144 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24fe7b6e_d390_45b4_9eb3_deb616ec1729.slice/crio-9645c111f6e613a9a0c3c78353a708c327ac8ef778e4eaf23ee3f22d3e5f4af8 WatchSource:0}: Error finding container 9645c111f6e613a9a0c3c78353a708c327ac8ef778e4eaf23ee3f22d3e5f4af8: Status 404 returned error can't find the container with id 9645c111f6e613a9a0c3c78353a708c327ac8ef778e4eaf23ee3f22d3e5f4af8 Jan 30 17:00:21 crc kubenswrapper[4875]: I0130 17:00:21.045061 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-496j4" podStartSLOduration=2.514082863 podStartE2EDuration="5.045043136s" podCreationTimestamp="2026-01-30 17:00:16 +0000 UTC" firstStartedPulling="2026-01-30 17:00:17.976152176 +0000 UTC m=+228.523515559" lastFinishedPulling="2026-01-30 17:00:20.507112449 +0000 UTC m=+231.054475832" observedRunningTime="2026-01-30 17:00:21.044406434 +0000 UTC m=+231.591769817" watchObservedRunningTime="2026-01-30 17:00:21.045043136 +0000 UTC m=+231.592406529" Jan 30 17:00:21 crc kubenswrapper[4875]: I0130 17:00:21.199027 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt"] Jan 30 17:00:22 crc kubenswrapper[4875]: I0130 17:00:22.033140 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" event={"ID":"57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d","Type":"ContainerStarted","Data":"1274935abb1c955371b6fc2544ef0a50704f1d93f90a3c53f14290e478f90c56"} Jan 30 17:00:22 crc kubenswrapper[4875]: I0130 17:00:22.033184 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" event={"ID":"57bd8ba0-dd88-46c7-97fe-f6b4c3798c5d","Type":"ContainerStarted","Data":"1c34904be12d4e11613237e48fb0cf445d91e325b16665f723e2e0abef21a724"} Jan 30 17:00:22 crc kubenswrapper[4875]: I0130 17:00:22.033282 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" Jan 30 17:00:22 crc kubenswrapper[4875]: I0130 17:00:22.034931 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" event={"ID":"24fe7b6e-d390-45b4-9eb3-deb616ec1729","Type":"ContainerStarted","Data":"4d51cdd93014ac245289da6e9aed5b58c07cd4b2ea1a22e06211cb00e2ba3a52"} Jan 30 17:00:22 crc kubenswrapper[4875]: I0130 17:00:22.034971 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" 
event={"ID":"24fe7b6e-d390-45b4-9eb3-deb616ec1729","Type":"ContainerStarted","Data":"9645c111f6e613a9a0c3c78353a708c327ac8ef778e4eaf23ee3f22d3e5f4af8"} Jan 30 17:00:22 crc kubenswrapper[4875]: I0130 17:00:22.035198 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:22 crc kubenswrapper[4875]: I0130 17:00:22.038160 4875 generic.go:334] "Generic (PLEG): container finished" podID="6596cd04-1bed-410b-8304-70d475ba79ee" containerID="ff7ad3479077441faa3d5f8dea0c5ffed76c9e26bdf0b3fd72ad005e465f7aa4" exitCode=0 Jan 30 17:00:22 crc kubenswrapper[4875]: I0130 17:00:22.038237 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gct2f" event={"ID":"6596cd04-1bed-410b-8304-70d475ba79ee","Type":"ContainerDied","Data":"ff7ad3479077441faa3d5f8dea0c5ffed76c9e26bdf0b3fd72ad005e465f7aa4"} Jan 30 17:00:22 crc kubenswrapper[4875]: I0130 17:00:22.040115 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" Jan 30 17:00:22 crc kubenswrapper[4875]: I0130 17:00:22.040608 4875 generic.go:334] "Generic (PLEG): container finished" podID="dc32276d-2194-4ac4-9a86-da06d803d46d" containerID="9a0b01049a7b1902a6fa345ee5dbe04eda721b74cf46c7ad033e3329096aaf56" exitCode=0 Jan 30 17:00:22 crc kubenswrapper[4875]: I0130 17:00:22.041058 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfpqk" event={"ID":"dc32276d-2194-4ac4-9a86-da06d803d46d","Type":"ContainerDied","Data":"9a0b01049a7b1902a6fa345ee5dbe04eda721b74cf46c7ad033e3329096aaf56"} Jan 30 17:00:22 crc kubenswrapper[4875]: I0130 17:00:22.041295 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" Jan 30 17:00:22 crc kubenswrapper[4875]: I0130 17:00:22.055727 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f5f686d7d-vdlkt" podStartSLOduration=3.055709847 podStartE2EDuration="3.055709847s" podCreationTimestamp="2026-01-30 17:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:00:22.054528557 +0000 UTC m=+232.601891950" watchObservedRunningTime="2026-01-30 17:00:22.055709847 +0000 UTC m=+232.603073230" Jan 30 17:00:22 crc kubenswrapper[4875]: I0130 17:00:22.087897 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6fd8fbbc8c-g8qdq" podStartSLOduration=4.087881144 podStartE2EDuration="4.087881144s" podCreationTimestamp="2026-01-30 17:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:00:22.071170355 +0000 UTC m=+232.618533738" watchObservedRunningTime="2026-01-30 17:00:22.087881144 +0000 UTC m=+232.635244527" Jan 30 17:00:23 crc kubenswrapper[4875]: I0130 17:00:23.047277 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gct2f" event={"ID":"6596cd04-1bed-410b-8304-70d475ba79ee","Type":"ContainerStarted","Data":"caa5578a627b211d62e6a1ce9f18e9475002964723fcb7e63215e3e6d7be19f4"} Jan 30 17:00:23 crc kubenswrapper[4875]: I0130 17:00:23.050604 4875 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-bfpqk" event={"ID":"dc32276d-2194-4ac4-9a86-da06d803d46d","Type":"ContainerStarted","Data":"5647757fe7653f1aa258c584cf401b01490bc441b976255aff27e287834c3294"} Jan 30 17:00:23 crc kubenswrapper[4875]: I0130 17:00:23.083394 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gct2f" podStartSLOduration=2.633832411 podStartE2EDuration="5.083373257s" podCreationTimestamp="2026-01-30 17:00:18 +0000 UTC" firstStartedPulling="2026-01-30 17:00:20.002428156 +0000 UTC m=+230.549791539" lastFinishedPulling="2026-01-30 17:00:22.451969002 +0000 UTC m=+232.999332385" observedRunningTime="2026-01-30 17:00:23.067698753 +0000 UTC m=+233.615062136" watchObservedRunningTime="2026-01-30 17:00:23.083373257 +0000 UTC m=+233.630736640" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.695191 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-496j4" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.695782 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-496j4" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.745108 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-496j4" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.762691 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bfpqk" podStartSLOduration=6.05830095 podStartE2EDuration="8.762670048s" podCreationTimestamp="2026-01-30 17:00:18 +0000 UTC" firstStartedPulling="2026-01-30 17:00:19.993232612 +0000 UTC m=+230.540595995" lastFinishedPulling="2026-01-30 17:00:22.69760171 +0000 UTC m=+233.244965093" observedRunningTime="2026-01-30 17:00:23.086008357 +0000 UTC m=+233.633371740" watchObservedRunningTime="2026-01-30 17:00:26.762670048 +0000 UTC m=+237.310033431" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.969135 4875 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.969934 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.972836 4875 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.972950 4875 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.974363 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95" gracePeriod=15 Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.974667 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666" gracePeriod=15 Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.974695 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e" gracePeriod=15 Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.974797 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b" gracePeriod=15 Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.974803 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7" gracePeriod=15 Jan 30 17:00:26 crc kubenswrapper[4875]: E0130 17:00:26.986653 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.987012 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 17:00:26 crc kubenswrapper[4875]: E0130 17:00:26.987031 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.987047 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 17:00:26 crc kubenswrapper[4875]: E0130 17:00:26.987082 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.987095 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" Jan 30 17:00:26 crc kubenswrapper[4875]: E0130 17:00:26.987111 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.987123 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 30 17:00:26 crc kubenswrapper[4875]: E0130 17:00:26.987166 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.987256 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 17:00:26 crc kubenswrapper[4875]: E0130 17:00:26.987295 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.987305 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.987857 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.987883 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.987902 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.987918 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.987928 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.987947 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 30 17:00:26 crc kubenswrapper[4875]: E0130 17:00:26.988180 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 17:00:26 crc kubenswrapper[4875]: I0130 17:00:26.988192 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.015972 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.016057 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.016146 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.016215 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.016249 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.016300 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.016336 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.016372 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.017742 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.113892 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-496j4" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.114609 4875 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.114971 4875 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.115448 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.117965 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.118037 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.118074 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.118090 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.118113 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.118146 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.118162 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.118186 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.118264 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.118310 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.118482 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.118514 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.118563 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.118629 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.118655 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.119344 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.119923 4875 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": 
dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.119980 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.197085 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9gm2r" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.197326 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9gm2r" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.237669 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9gm2r" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.238421 4875 status_manager.go:851] "Failed to get status for pod" podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.238806 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.239356 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.239672 4875 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:27 crc kubenswrapper[4875]: I0130 17:00:27.315811 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 17:00:27 crc kubenswrapper[4875]: E0130 17:00:27.342623 4875 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.65:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f90d5fdd92464 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 17:00:27.34119434 +0000 UTC m=+237.888557733,LastTimestamp:2026-01-30 17:00:27.34119434 +0000 UTC m=+237.888557733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.082470 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.083646 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.084324 4875 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7" exitCode=0 Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.084367 4875 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e" exitCode=0 Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.084376 4875 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666" exitCode=0 Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.084383 4875 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b" exitCode=2 Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.084384 4875 scope.go:117] "RemoveContainer" containerID="92e418cad9ae26085498c94e2629e2f620bdef83e49b3d6d7abffae372ef677d" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.086212 4875 generic.go:334] "Generic (PLEG): container finished" podID="d957892e-e8ab-4817-8690-7cb2613af5af" containerID="5ed87c912071597cb67b0845d1975d6ce62087ae18c5294af7282f924e8412a7" exitCode=0 Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.086262 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d957892e-e8ab-4817-8690-7cb2613af5af","Type":"ContainerDied","Data":"5ed87c912071597cb67b0845d1975d6ce62087ae18c5294af7282f924e8412a7"} Jan 30 17:00:28 
crc kubenswrapper[4875]: I0130 17:00:28.086977 4875 status_manager.go:851] "Failed to get status for pod" podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.087354 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.087577 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.087777 4875 status_manager.go:851] "Failed to get status for pod" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.087945 4875 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.088682 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"480b1c7ca112d5042c88140a13f4b797cbf983e2f4553a36846136dfb5953c9c"} Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.088711 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4e084a09e8ef23f398316639aefdf3ec7500280e3501de5b99081a5878a0d05e"} Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.089087 4875 status_manager.go:851] "Failed to get status for pod" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.089235 4875 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.089375 4875 status_manager.go:851] "Failed to get status for pod" 
podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.090000 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.090185 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.133017 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9gm2r" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.133487 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.133747 4875 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.134034 4875 status_manager.go:851] "Failed to get status for pod" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.134363 4875 status_manager.go:851] "Failed to get status for pod" podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:28 crc kubenswrapper[4875]: I0130 17:00:28.134578 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.104802 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 
17:00:29.130738 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bfpqk" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.130798 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bfpqk" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.190229 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bfpqk" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.190612 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.190893 4875 status_manager.go:851] "Failed to get status for pod" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.191216 4875 status_manager.go:851] "Failed to get status for pod" podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.191467 4875 status_manager.go:851] "Failed to get status for pod" podUID="dc32276d-2194-4ac4-9a86-da06d803d46d" pod="openshift-marketplace/community-operators-bfpqk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bfpqk\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.191731 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.360994 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gct2f" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.361296 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gct2f" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.419888 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gct2f" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.420453 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.420792 
4875 status_manager.go:851] "Failed to get status for pod" podUID="6596cd04-1bed-410b-8304-70d475ba79ee" pod="openshift-marketplace/redhat-operators-gct2f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gct2f\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.421137 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.421555 4875 status_manager.go:851] "Failed to get status for pod" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.421766 4875 status_manager.go:851] "Failed to get status for pod" podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.422202 4875 status_manager.go:851] "Failed to get status for pod" podUID="dc32276d-2194-4ac4-9a86-da06d803d46d" pod="openshift-marketplace/community-operators-bfpqk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bfpqk\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.492864 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.493366 4875 status_manager.go:851] "Failed to get status for pod" podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.493636 4875 status_manager.go:851] "Failed to get status for pod" podUID="dc32276d-2194-4ac4-9a86-da06d803d46d" pod="openshift-marketplace/community-operators-bfpqk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bfpqk\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.494095 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.494604 4875 status_manager.go:851] "Failed to get status for pod" podUID="6596cd04-1bed-410b-8304-70d475ba79ee" pod="openshift-marketplace/redhat-operators-gct2f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gct2f\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.494829 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.495064 4875 status_manager.go:851] "Failed to get status for pod" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.546745 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d957892e-e8ab-4817-8690-7cb2613af5af-var-lock\") pod \"d957892e-e8ab-4817-8690-7cb2613af5af\" (UID: \"d957892e-e8ab-4817-8690-7cb2613af5af\") " Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.546805 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d957892e-e8ab-4817-8690-7cb2613af5af-kube-api-access\") pod \"d957892e-e8ab-4817-8690-7cb2613af5af\" (UID: \"d957892e-e8ab-4817-8690-7cb2613af5af\") " Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.546819 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d957892e-e8ab-4817-8690-7cb2613af5af-var-lock" (OuterVolumeSpecName: "var-lock") pod "d957892e-e8ab-4817-8690-7cb2613af5af" (UID: "d957892e-e8ab-4817-8690-7cb2613af5af"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.546897 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d957892e-e8ab-4817-8690-7cb2613af5af-kubelet-dir\") pod \"d957892e-e8ab-4817-8690-7cb2613af5af\" (UID: \"d957892e-e8ab-4817-8690-7cb2613af5af\") " Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.546990 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d957892e-e8ab-4817-8690-7cb2613af5af-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d957892e-e8ab-4817-8690-7cb2613af5af" (UID: "d957892e-e8ab-4817-8690-7cb2613af5af"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.547106 4875 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d957892e-e8ab-4817-8690-7cb2613af5af-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.547119 4875 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d957892e-e8ab-4817-8690-7cb2613af5af-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.552101 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d957892e-e8ab-4817-8690-7cb2613af5af-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d957892e-e8ab-4817-8690-7cb2613af5af" (UID: "d957892e-e8ab-4817-8690-7cb2613af5af"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.564225 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.565003 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.565601 4875 status_manager.go:851] "Failed to get status for pod" podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.565984 4875 status_manager.go:851] "Failed to get status for pod" podUID="dc32276d-2194-4ac4-9a86-da06d803d46d" pod="openshift-marketplace/community-operators-bfpqk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bfpqk\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.566256 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.566671 4875 status_manager.go:851] "Failed to get status for pod" podUID="6596cd04-1bed-410b-8304-70d475ba79ee" pod="openshift-marketplace/redhat-operators-gct2f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gct2f\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.566917 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.567186 4875 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.567417 4875 status_manager.go:851] "Failed to get status for pod" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.648054 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.648162 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 17:00:29 crc kubenswrapper[4875]: 
I0130 17:00:29.648243 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.648321 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.648332 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.648410 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.648840 4875 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.648894 4875 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.648922 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d957892e-e8ab-4817-8690-7cb2613af5af-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:29 crc kubenswrapper[4875]: I0130 17:00:29.648948 4875 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:29 crc kubenswrapper[4875]: E0130 17:00:29.783770 4875 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.65:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f90d5fdd92464 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 17:00:27.34119434 +0000 UTC 
m=+237.888557733,LastTimestamp:2026-01-30 17:00:27.34119434 +0000 UTC m=+237.888557733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.113991 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.114909 4875 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95" exitCode=0 Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.114994 4875 scope.go:117] "RemoveContainer" containerID="079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.114994 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.117464 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d957892e-e8ab-4817-8690-7cb2613af5af","Type":"ContainerDied","Data":"56ec36140678933d455c65e974dd23a99c72d923d7bccdc8f66618e3139f9f7e"} Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.117537 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56ec36140678933d455c65e974dd23a99c72d923d7bccdc8f66618e3139f9f7e" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.117598 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.140821 4875 scope.go:117] "RemoveContainer" containerID="308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.144475 4875 status_manager.go:851] "Failed to get status for pod" podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.145032 4875 status_manager.go:851] "Failed to get status for pod" podUID="dc32276d-2194-4ac4-9a86-da06d803d46d" pod="openshift-marketplace/community-operators-bfpqk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bfpqk\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.148472 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.148798 4875 status_manager.go:851] "Failed to get status for pod" podUID="6596cd04-1bed-410b-8304-70d475ba79ee" pod="openshift-marketplace/redhat-operators-gct2f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gct2f\": dial tcp 
38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.149086 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.149362 4875 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.149673 4875 status_manager.go:851] "Failed to get status for pod" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.150072 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.150354 4875 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.150668 4875 status_manager.go:851] "Failed to get status for pod" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.150964 4875 status_manager.go:851] "Failed to get status for pod" podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.151301 4875 status_manager.go:851] "Failed to get status for pod" podUID="dc32276d-2194-4ac4-9a86-da06d803d46d" pod="openshift-marketplace/community-operators-bfpqk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bfpqk\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.151942 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.152107 4875 status_manager.go:851] "Failed to get status for pod" podUID="6596cd04-1bed-410b-8304-70d475ba79ee" pod="openshift-marketplace/redhat-operators-gct2f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gct2f\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.155055 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.165396 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bfpqk" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.168629 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.168985 4875 scope.go:117] "RemoveContainer" containerID="fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.169268 4875 status_manager.go:851] "Failed to get status for pod" podUID="6596cd04-1bed-410b-8304-70d475ba79ee" pod="openshift-marketplace/redhat-operators-gct2f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gct2f\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.171474 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.172252 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gct2f" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.172383 4875 status_manager.go:851] "Failed to get status for pod" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.173013 4875 status_manager.go:851] "Failed to get status for pod" podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.173207 4875 status_manager.go:851] "Failed to get status for pod" podUID="dc32276d-2194-4ac4-9a86-da06d803d46d" pod="openshift-marketplace/community-operators-bfpqk" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bfpqk\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.173739 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.174282 4875 status_manager.go:851] "Failed to get status for pod" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.174772 4875 status_manager.go:851] "Failed to get status for pod" podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.174947 4875 status_manager.go:851] "Failed to get status for pod" podUID="dc32276d-2194-4ac4-9a86-da06d803d46d" pod="openshift-marketplace/community-operators-bfpqk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bfpqk\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.175096 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.175395 4875 status_manager.go:851] "Failed to get status for pod" podUID="6596cd04-1bed-410b-8304-70d475ba79ee" pod="openshift-marketplace/redhat-operators-gct2f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gct2f\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.205226 4875 scope.go:117] "RemoveContainer" containerID="bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.223145 4875 scope.go:117] "RemoveContainer" containerID="2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.260524 4875 scope.go:117] "RemoveContainer" containerID="5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.281411 4875 scope.go:117] "RemoveContainer" containerID="079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7" Jan 30 17:00:30 crc kubenswrapper[4875]: E0130 17:00:30.281730 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\": container with ID starting with 
079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7 not found: ID does not exist" containerID="079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.281776 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7"} err="failed to get container status \"079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\": rpc error: code = NotFound desc = could not find container \"079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7\": container with ID starting with 079d8acee71d14644dab0eb049aff78fbb36359227fb4df7e09f86c849accad7 not found: ID does not exist" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.281804 4875 scope.go:117] "RemoveContainer" containerID="308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e" Jan 30 17:00:30 crc kubenswrapper[4875]: E0130 17:00:30.282151 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\": container with ID starting with 308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e not found: ID does not exist" containerID="308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.282186 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e"} err="failed to get container status \"308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\": rpc error: code = NotFound desc = could not find container \"308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e\": container with ID starting with 308ac6bec889d2f5cd2a9520874be2c0615761582f2fbbc0382f952e3f1b4b6e not found: ID does not exist" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.282211 4875 scope.go:117] "RemoveContainer" containerID="fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666" Jan 30 17:00:30 crc kubenswrapper[4875]: E0130 17:00:30.282531 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\": container with ID starting with fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666 not found: ID does not exist" containerID="fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.282558 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666"} err="failed to get container status \"fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\": rpc error: code = NotFound desc = could not find container \"fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666\": container with ID starting with fa0f69cbdc90b9f8260df20fd05d4e88a94f91e5a9af2b0179d94275fba90666 not found: ID does not exist" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.282573 4875 scope.go:117] "RemoveContainer" containerID="bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b" Jan 30 17:00:30 crc kubenswrapper[4875]: E0130 17:00:30.283029 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\": container with ID starting with bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b not found: ID does not exist" containerID="bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.283054 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b"} err="failed to get container status \"bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\": rpc error: code = NotFound desc = could not find container \"bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b\": container with ID starting with bd750224302bbcb32d6e15ebe94c789d34949d301ed52bee89d9d4ab756e601b not found: ID does not exist" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.283070 4875 scope.go:117] "RemoveContainer" containerID="2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95" Jan 30 17:00:30 crc kubenswrapper[4875]: E0130 17:00:30.283322 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\": container with ID starting with 2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95 not found: ID does not exist" containerID="2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.283340 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95"} err="failed to get container status \"2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\": rpc error: code = NotFound desc = could not find container \"2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95\": container with ID starting with 2108dfe648e0c5e0a377170db2fee1cea70197f066b746d6409a005959d7bc95 not found: ID does not exist" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.283352 4875 scope.go:117] "RemoveContainer" containerID="5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923" Jan 30 17:00:30 crc kubenswrapper[4875]: E0130 17:00:30.283632 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\": container with ID starting with 5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923 not found: ID does not exist" containerID="5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923" Jan 30 17:00:30 crc kubenswrapper[4875]: I0130 17:00:30.283656 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923"} err="failed to get container status \"5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\": rpc error: code = NotFound desc = could not find container \"5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923\": container with ID starting with 5530e05cb0c365bb13305a2a63b60bed37c7994f13e2ac62af8c524dd3e75923 not found: ID does not exist" Jan 30 17:00:31 crc kubenswrapper[4875]: E0130 17:00:31.327136 4875 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:31 crc kubenswrapper[4875]: E0130 17:00:31.327526 4875 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:31 crc kubenswrapper[4875]: E0130 17:00:31.328003 4875 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:31 crc kubenswrapper[4875]: E0130 17:00:31.328245 4875 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:31 crc kubenswrapper[4875]: E0130 17:00:31.328488 4875 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:31 crc kubenswrapper[4875]: I0130 17:00:31.328517 4875 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 17:00:31 crc kubenswrapper[4875]: E0130 17:00:31.328786 4875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="200ms" Jan 30 17:00:31 crc kubenswrapper[4875]: E0130 17:00:31.530440 4875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="400ms" Jan 30 17:00:31 crc kubenswrapper[4875]: E0130 17:00:31.931087 4875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="800ms" Jan 30 17:00:32 crc kubenswrapper[4875]: E0130 17:00:32.406014 4875 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd957892e_e8ab_4817_8690_7cb2613af5af.slice\": RecentStats: unable to find data in memory cache]" Jan 30 17:00:32 crc kubenswrapper[4875]: E0130 17:00:32.731844 4875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="1.6s" Jan 30 17:00:34 crc kubenswrapper[4875]: E0130 17:00:34.333071 4875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="3.2s" Jan 30 17:00:37 crc kubenswrapper[4875]: I0130 17:00:37.454683 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" podUID="a764d0e3-2762-4d13-b92e-30e68c104bf6" containerName="oauth-openshift" containerID="cri-o://4d3ea55d59a5904fcd2b94de812a53a149c4e4deb5cc2e371f131b8f105e1208" gracePeriod=15 Jan 30 17:00:37 crc kubenswrapper[4875]: E0130 17:00:37.533684 4875 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="6.4s" Jan 30 17:00:37 crc kubenswrapper[4875]: I0130 17:00:37.927022 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 17:00:37 crc kubenswrapper[4875]: I0130 17:00:37.927871 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:37 crc kubenswrapper[4875]: I0130 17:00:37.928126 4875 status_manager.go:851] "Failed to get status for pod" podUID="6596cd04-1bed-410b-8304-70d475ba79ee" pod="openshift-marketplace/redhat-operators-gct2f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gct2f\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:37 crc kubenswrapper[4875]: I0130 17:00:37.928412 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:37 crc kubenswrapper[4875]: I0130 17:00:37.928639 4875 status_manager.go:851] "Failed to get status for pod" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:37 crc kubenswrapper[4875]: I0130 17:00:37.928796 4875 status_manager.go:851] "Failed to get status for pod" podUID="a764d0e3-2762-4d13-b92e-30e68c104bf6" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-gv6jw\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:37 crc kubenswrapper[4875]: I0130 17:00:37.928974 4875 status_manager.go:851] "Failed to get status for pod" podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:37 crc kubenswrapper[4875]: I0130 
17:00:37.929123 4875 status_manager.go:851] "Failed to get status for pod" podUID="dc32276d-2194-4ac4-9a86-da06d803d46d" pod="openshift-marketplace/community-operators-bfpqk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bfpqk\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.054727 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-login\") pod \"a764d0e3-2762-4d13-b92e-30e68c104bf6\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.054785 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a764d0e3-2762-4d13-b92e-30e68c104bf6-audit-dir\") pod \"a764d0e3-2762-4d13-b92e-30e68c104bf6\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.054809 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-provider-selection\") pod \"a764d0e3-2762-4d13-b92e-30e68c104bf6\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.054864 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-trusted-ca-bundle\") pod \"a764d0e3-2762-4d13-b92e-30e68c104bf6\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.054883 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-idp-0-file-data\") pod \"a764d0e3-2762-4d13-b92e-30e68c104bf6\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.054929 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndkxr\" (UniqueName: \"kubernetes.io/projected/a764d0e3-2762-4d13-b92e-30e68c104bf6-kube-api-access-ndkxr\") pod \"a764d0e3-2762-4d13-b92e-30e68c104bf6\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.054965 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-cliconfig\") pod \"a764d0e3-2762-4d13-b92e-30e68c104bf6\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.054990 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-ocp-branding-template\") pod \"a764d0e3-2762-4d13-b92e-30e68c104bf6\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.055012 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-service-ca\") pod \"a764d0e3-2762-4d13-b92e-30e68c104bf6\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.055035 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-router-certs\") pod \"a764d0e3-2762-4d13-b92e-30e68c104bf6\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.055056 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-session\") pod \"a764d0e3-2762-4d13-b92e-30e68c104bf6\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.055089 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-serving-cert\") pod \"a764d0e3-2762-4d13-b92e-30e68c104bf6\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.055109 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-audit-policies\") pod \"a764d0e3-2762-4d13-b92e-30e68c104bf6\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.055132 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-error\") pod \"a764d0e3-2762-4d13-b92e-30e68c104bf6\" (UID: \"a764d0e3-2762-4d13-b92e-30e68c104bf6\") " Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.055919 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a764d0e3-2762-4d13-b92e-30e68c104bf6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a764d0e3-2762-4d13-b92e-30e68c104bf6" (UID: "a764d0e3-2762-4d13-b92e-30e68c104bf6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.057225 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "a764d0e3-2762-4d13-b92e-30e68c104bf6" (UID: "a764d0e3-2762-4d13-b92e-30e68c104bf6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.057250 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "a764d0e3-2762-4d13-b92e-30e68c104bf6" (UID: "a764d0e3-2762-4d13-b92e-30e68c104bf6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.057557 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "a764d0e3-2762-4d13-b92e-30e68c104bf6" (UID: "a764d0e3-2762-4d13-b92e-30e68c104bf6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.057920 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "a764d0e3-2762-4d13-b92e-30e68c104bf6" (UID: "a764d0e3-2762-4d13-b92e-30e68c104bf6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.061175 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "a764d0e3-2762-4d13-b92e-30e68c104bf6" (UID: "a764d0e3-2762-4d13-b92e-30e68c104bf6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.061437 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a764d0e3-2762-4d13-b92e-30e68c104bf6-kube-api-access-ndkxr" (OuterVolumeSpecName: "kube-api-access-ndkxr") pod "a764d0e3-2762-4d13-b92e-30e68c104bf6" (UID: "a764d0e3-2762-4d13-b92e-30e68c104bf6"). InnerVolumeSpecName "kube-api-access-ndkxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.061667 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "a764d0e3-2762-4d13-b92e-30e68c104bf6" (UID: "a764d0e3-2762-4d13-b92e-30e68c104bf6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.061951 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "a764d0e3-2762-4d13-b92e-30e68c104bf6" (UID: "a764d0e3-2762-4d13-b92e-30e68c104bf6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.062558 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "a764d0e3-2762-4d13-b92e-30e68c104bf6" (UID: "a764d0e3-2762-4d13-b92e-30e68c104bf6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.062764 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "a764d0e3-2762-4d13-b92e-30e68c104bf6" (UID: "a764d0e3-2762-4d13-b92e-30e68c104bf6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.062926 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "a764d0e3-2762-4d13-b92e-30e68c104bf6" (UID: "a764d0e3-2762-4d13-b92e-30e68c104bf6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.063498 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "a764d0e3-2762-4d13-b92e-30e68c104bf6" (UID: "a764d0e3-2762-4d13-b92e-30e68c104bf6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.068866 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "a764d0e3-2762-4d13-b92e-30e68c104bf6" (UID: "a764d0e3-2762-4d13-b92e-30e68c104bf6"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.155866 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.155894 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.155904 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndkxr\" (UniqueName: \"kubernetes.io/projected/a764d0e3-2762-4d13-b92e-30e68c104bf6-kube-api-access-ndkxr\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.155915 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.155924 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.155933 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.155942 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.155951 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.155973 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.155984 4875 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a764d0e3-2762-4d13-b92e-30e68c104bf6-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.155994 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.156003 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.156012 4875 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a764d0e3-2762-4d13-b92e-30e68c104bf6-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.156020 4875 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a764d0e3-2762-4d13-b92e-30e68c104bf6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.161807 4875 generic.go:334] "Generic (PLEG): container finished" podID="a764d0e3-2762-4d13-b92e-30e68c104bf6" containerID="4d3ea55d59a5904fcd2b94de812a53a149c4e4deb5cc2e371f131b8f105e1208" exitCode=0 Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.161859 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" event={"ID":"a764d0e3-2762-4d13-b92e-30e68c104bf6","Type":"ContainerDied","Data":"4d3ea55d59a5904fcd2b94de812a53a149c4e4deb5cc2e371f131b8f105e1208"} Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.161895 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" event={"ID":"a764d0e3-2762-4d13-b92e-30e68c104bf6","Type":"ContainerDied","Data":"2fef2e01831b17a5d310ab2236793ede88081f6378f05d6f9be272312407298f"} Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.161919 4875 scope.go:117] "RemoveContainer" containerID="4d3ea55d59a5904fcd2b94de812a53a149c4e4deb5cc2e371f131b8f105e1208" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.162048 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.163202 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.163474 4875 status_manager.go:851] "Failed to get status for pod" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.163755 4875 status_manager.go:851] "Failed to get status for pod" podUID="a764d0e3-2762-4d13-b92e-30e68c104bf6" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-gv6jw\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.164044 4875 status_manager.go:851] "Failed to get status for pod" podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.164318 4875 status_manager.go:851] "Failed to get status for pod" podUID="dc32276d-2194-4ac4-9a86-da06d803d46d" pod="openshift-marketplace/community-operators-bfpqk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bfpqk\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.164663 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.165361 4875 status_manager.go:851] "Failed to get status for pod" podUID="6596cd04-1bed-410b-8304-70d475ba79ee" pod="openshift-marketplace/redhat-operators-gct2f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gct2f\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.165905 4875 status_manager.go:851] "Failed to get status for pod" podUID="a764d0e3-2762-4d13-b92e-30e68c104bf6" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-gv6jw\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.166195 4875 status_manager.go:851] "Failed to get status for pod" podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.166458 4875 status_manager.go:851] "Failed to get status for pod" podUID="dc32276d-2194-4ac4-9a86-da06d803d46d" pod="openshift-marketplace/community-operators-bfpqk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bfpqk\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.167801 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.168053 4875 status_manager.go:851] "Failed to get status for pod" podUID="6596cd04-1bed-410b-8304-70d475ba79ee" pod="openshift-marketplace/redhat-operators-gct2f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gct2f\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.168286 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.168551 4875 status_manager.go:851] "Failed to get status for pod" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.183297 4875 scope.go:117] "RemoveContainer" containerID="4d3ea55d59a5904fcd2b94de812a53a149c4e4deb5cc2e371f131b8f105e1208" Jan 30 17:00:38 crc kubenswrapper[4875]: E0130 17:00:38.183927 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d3ea55d59a5904fcd2b94de812a53a149c4e4deb5cc2e371f131b8f105e1208\": container with ID starting with 4d3ea55d59a5904fcd2b94de812a53a149c4e4deb5cc2e371f131b8f105e1208 not found: ID does not exist" containerID="4d3ea55d59a5904fcd2b94de812a53a149c4e4deb5cc2e371f131b8f105e1208" Jan 30 17:00:38 crc kubenswrapper[4875]: I0130 17:00:38.183962 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d3ea55d59a5904fcd2b94de812a53a149c4e4deb5cc2e371f131b8f105e1208"} err="failed to get container status \"4d3ea55d59a5904fcd2b94de812a53a149c4e4deb5cc2e371f131b8f105e1208\": rpc error: code = NotFound desc = could not find container \"4d3ea55d59a5904fcd2b94de812a53a149c4e4deb5cc2e371f131b8f105e1208\": container with ID starting with 4d3ea55d59a5904fcd2b94de812a53a149c4e4deb5cc2e371f131b8f105e1208 not found: ID does not exist" Jan 30 17:00:39 crc kubenswrapper[4875]: E0130 17:00:39.785360 4875 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.65:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f90d5fdd92464 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 17:00:27.34119434 +0000 UTC m=+237.888557733,LastTimestamp:2026-01-30 17:00:27.34119434 +0000 UTC m=+237.888557733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.135732 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.139116 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.139644 4875 status_manager.go:851] "Failed to get status for pod" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.139982 4875 status_manager.go:851] "Failed to get status for pod" podUID="a764d0e3-2762-4d13-b92e-30e68c104bf6" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-gv6jw\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.140289 4875 status_manager.go:851] "Failed to get status for pod" podUID="dc32276d-2194-4ac4-9a86-da06d803d46d" pod="openshift-marketplace/community-operators-bfpqk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bfpqk\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.140834 4875 status_manager.go:851] "Failed to get status for pod" podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.141110 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.141380 4875 status_manager.go:851] "Failed to get status for pod" podUID="6596cd04-1bed-410b-8304-70d475ba79ee" pod="openshift-marketplace/redhat-operators-gct2f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gct2f\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.141742 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.141981 4875 status_manager.go:851] "Failed to get status for pod" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.142229 4875 status_manager.go:851] "Failed to get status for pod" podUID="a764d0e3-2762-4d13-b92e-30e68c104bf6" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-gv6jw\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.142496 4875 status_manager.go:851] "Failed to get status for pod" podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.142724 4875 status_manager.go:851] "Failed to get status for pod" podUID="dc32276d-2194-4ac4-9a86-da06d803d46d" pod="openshift-marketplace/community-operators-bfpqk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bfpqk\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.142981 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.143259 4875 status_manager.go:851] "Failed to get status for pod" podUID="6596cd04-1bed-410b-8304-70d475ba79ee" pod="openshift-marketplace/redhat-operators-gct2f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gct2f\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.150976 4875 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="958d4578-6434-4ac1-8cb6-b20988d13e90" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.151020 4875 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="958d4578-6434-4ac1-8cb6-b20988d13e90" Jan 30 17:00:40 crc kubenswrapper[4875]: E0130 17:00:40.151408 4875 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:40 crc kubenswrapper[4875]: I0130 17:00:40.151880 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:40 crc kubenswrapper[4875]: W0130 17:00:40.183245 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-480650bd6b6741accecac0da7dffe123f32c7a3d41a0e5dd628dae573048375c WatchSource:0}: Error finding container 480650bd6b6741accecac0da7dffe123f32c7a3d41a0e5dd628dae573048375c: Status 404 returned error can't find the container with id 480650bd6b6741accecac0da7dffe123f32c7a3d41a0e5dd628dae573048375c Jan 30 17:00:41 crc kubenswrapper[4875]: I0130 17:00:41.186418 4875 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="3ce71711e1d59dbbceff0a999a46ead4b4cd32f0367b346b4abde928d0b02b8e" exitCode=0 Jan 30 17:00:41 crc kubenswrapper[4875]: I0130 17:00:41.186466 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"3ce71711e1d59dbbceff0a999a46ead4b4cd32f0367b346b4abde928d0b02b8e"} Jan 30 17:00:41 crc kubenswrapper[4875]: I0130 17:00:41.186497 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"480650bd6b6741accecac0da7dffe123f32c7a3d41a0e5dd628dae573048375c"} Jan 30 17:00:41 crc kubenswrapper[4875]: I0130 17:00:41.186986 4875 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="958d4578-6434-4ac1-8cb6-b20988d13e90" Jan 30 17:00:41 crc kubenswrapper[4875]: I0130 17:00:41.187011 4875 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="958d4578-6434-4ac1-8cb6-b20988d13e90" Jan 30 17:00:41 crc kubenswrapper[4875]: E0130 17:00:41.188155 4875 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:41 crc kubenswrapper[4875]: I0130 17:00:41.188163 4875 status_manager.go:851] "Failed to get status for pod" podUID="99ac87cd-0125-4818-9369-713bcd27baa1" pod="openshift-marketplace/redhat-marketplace-496j4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-496j4\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:41 crc kubenswrapper[4875]: I0130 17:00:41.188742 4875 status_manager.go:851] "Failed to get status for pod" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:41 crc kubenswrapper[4875]: I0130 17:00:41.189311 4875 status_manager.go:851] "Failed to get status for pod" podUID="a764d0e3-2762-4d13-b92e-30e68c104bf6" pod="openshift-authentication/oauth-openshift-558db77b4-gv6jw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-gv6jw\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:41 crc kubenswrapper[4875]: I0130 17:00:41.189818 4875 status_manager.go:851] "Failed to get status for pod" podUID="19625989-de41-4994-b07f-6d0880ba073c" pod="openshift-marketplace/certified-operators-9gm2r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-9gm2r\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:41 crc kubenswrapper[4875]: I0130 17:00:41.190239 4875 status_manager.go:851] "Failed to get status for pod" podUID="dc32276d-2194-4ac4-9a86-da06d803d46d" pod="openshift-marketplace/community-operators-bfpqk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bfpqk\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:41 crc kubenswrapper[4875]: I0130 17:00:41.191888 4875 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:41 crc kubenswrapper[4875]: I0130 17:00:41.192416 4875 status_manager.go:851] "Failed to get status for pod" podUID="6596cd04-1bed-410b-8304-70d475ba79ee" pod="openshift-marketplace/redhat-operators-gct2f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-gct2f\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 30 17:00:41 crc kubenswrapper[4875]: E0130 17:00:41.199938 4875 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.65:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" volumeName="registry-storage" Jan 30 17:00:42 crc kubenswrapper[4875]: I0130 17:00:42.208564 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 17:00:42 crc kubenswrapper[4875]: I0130 17:00:42.209147 4875 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92" exitCode=1 Jan 30 17:00:42 crc kubenswrapper[4875]: I0130 17:00:42.209279 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92"} Jan 30 17:00:42 crc kubenswrapper[4875]: I0130 17:00:42.210305 4875 scope.go:117] "RemoveContainer" containerID="87b36ddb911ca1e64973a711f167432c07ccde8ad806ceb03457752137420e92" Jan 30 17:00:42 crc kubenswrapper[4875]: I0130 17:00:42.213924 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2eef73abbe99a64ca895d9d46ebbd857810658ca5fbaabb8c33924ec061a7f71"} Jan 30 17:00:42 crc kubenswrapper[4875]: I0130 17:00:42.213978 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"42a5bdd5b6ef4031b8c158db4939bcc36c6fcbaf0edd9eea821701351ac46332"} Jan 30 17:00:42 crc kubenswrapper[4875]: I0130 17:00:42.214004 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6332786e6cab7e5006664c0958faebaf2f10e7b9228a2d1d5453d47a32e06b56"} Jan 30 17:00:42 crc kubenswrapper[4875]: I0130 17:00:42.214021 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0431ab01e686b45cc1c7b91bfe20d47354a1fd9a102a15d04c6d73f227d25bce"} Jan 30 17:00:42 crc kubenswrapper[4875]: I0130 17:00:42.478685 4875 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 17:00:42 crc kubenswrapper[4875]: E0130 17:00:42.540305 4875 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd957892e_e8ab_4817_8690_7cb2613af5af.slice\": RecentStats: unable to find data in memory cache]" Jan 30 17:00:43 crc kubenswrapper[4875]: I0130 17:00:43.221173 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 17:00:43 crc kubenswrapper[4875]: I0130 17:00:43.221520 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ae6c834983a913da65efba494e5eb9efb3db46acf66127356b6e1dd561b3911b"} Jan 30 17:00:43 crc kubenswrapper[4875]: I0130 17:00:43.225122 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f746108d114f74d372d9be8048984087680519d8791a594190cfabfc19f15f02"} Jan 30 17:00:43 crc kubenswrapper[4875]: I0130 17:00:43.225334 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:43 crc kubenswrapper[4875]: I0130 17:00:43.225420 4875 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="958d4578-6434-4ac1-8cb6-b20988d13e90" Jan 30 
17:00:43 crc kubenswrapper[4875]: I0130 17:00:43.225442 4875 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="958d4578-6434-4ac1-8cb6-b20988d13e90" Jan 30 17:00:44 crc kubenswrapper[4875]: I0130 17:00:44.035624 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 17:00:44 crc kubenswrapper[4875]: I0130 17:00:44.038892 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 17:00:44 crc kubenswrapper[4875]: I0130 17:00:44.229342 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 17:00:45 crc kubenswrapper[4875]: I0130 17:00:45.152440 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:45 crc kubenswrapper[4875]: I0130 17:00:45.152496 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:45 crc kubenswrapper[4875]: I0130 17:00:45.161228 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:48 crc kubenswrapper[4875]: I0130 17:00:48.235669 4875 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:48 crc kubenswrapper[4875]: I0130 17:00:48.253561 4875 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="958d4578-6434-4ac1-8cb6-b20988d13e90" Jan 30 17:00:48 crc kubenswrapper[4875]: I0130 17:00:48.253605 4875 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="958d4578-6434-4ac1-8cb6-b20988d13e90" Jan 30 17:00:48 crc kubenswrapper[4875]: I0130 17:00:48.257462 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 17:00:49 crc kubenswrapper[4875]: I0130 17:00:49.257829 4875 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="958d4578-6434-4ac1-8cb6-b20988d13e90" Jan 30 17:00:49 crc kubenswrapper[4875]: I0130 17:00:49.257867 4875 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="958d4578-6434-4ac1-8cb6-b20988d13e90" Jan 30 17:00:50 crc kubenswrapper[4875]: I0130 17:00:50.155631 4875 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b244846d-8e36-484f-91fa-05f504f12965" Jan 30 17:00:52 crc kubenswrapper[4875]: E0130 17:00:52.675092 4875 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podd957892e_e8ab_4817_8690_7cb2613af5af.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache]" Jan 30 17:00:58 crc kubenswrapper[4875]: I0130 17:00:58.206347 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 17:00:58 crc kubenswrapper[4875]: I0130 
17:00:58.370471 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 30 17:00:58 crc kubenswrapper[4875]: I0130 17:00:58.609143 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 30 17:00:58 crc kubenswrapper[4875]: I0130 17:00:58.990791 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 30 17:00:59 crc kubenswrapper[4875]: I0130 17:00:59.067891 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 30 17:00:59 crc kubenswrapper[4875]: I0130 17:00:59.107176 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 30 17:00:59 crc kubenswrapper[4875]: I0130 17:00:59.450480 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 30 17:00:59 crc kubenswrapper[4875]: I0130 17:00:59.530068 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 30 17:00:59 crc kubenswrapper[4875]: I0130 17:00:59.559965 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 30 17:00:59 crc kubenswrapper[4875]: I0130 17:00:59.772484 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 30 17:00:59 crc kubenswrapper[4875]: I0130 17:00:59.815744 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 30 17:00:59 crc kubenswrapper[4875]: I0130 17:00:59.836222 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 30 17:00:59 crc kubenswrapper[4875]: I0130 17:00:59.992300 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 17:01:00 crc kubenswrapper[4875]: I0130 17:01:00.036706 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 30 17:01:00 crc kubenswrapper[4875]: I0130 17:01:00.307356 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 30 17:01:00 crc kubenswrapper[4875]: I0130 17:01:00.340194 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 30 17:01:00 crc kubenswrapper[4875]: I0130 17:01:00.354957 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 30 17:01:00 crc kubenswrapper[4875]: I0130 17:01:00.371396 4875 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 30 17:01:00 crc kubenswrapper[4875]: I0130 17:01:00.374158 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=33.374145727 podStartE2EDuration="33.374145727s" podCreationTimestamp="2026-01-30 17:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:00:47.934259445 +0000 UTC m=+258.481622828" watchObservedRunningTime="2026-01-30 17:01:00.374145727 +0000 UTC m=+270.921509110"
Jan 30 17:01:00 crc kubenswrapper[4875]: I0130 17:01:00.375522 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-gv6jw"]
Jan 30 17:01:00 crc kubenswrapper[4875]: I0130 17:01:00.375568 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 17:01:00 crc kubenswrapper[4875]: I0130 17:01:00.383740 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 17:01:00 crc kubenswrapper[4875]: I0130 17:01:00.403458 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=12.403437905 podStartE2EDuration="12.403437905s" podCreationTimestamp="2026-01-30 17:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:01:00.397912562 +0000 UTC m=+270.945275965" watchObservedRunningTime="2026-01-30 17:01:00.403437905 +0000 UTC m=+270.950801288"
Jan 30 17:01:00 crc kubenswrapper[4875]: I0130 17:01:00.455232 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 30 17:01:00 crc kubenswrapper[4875]: I0130 17:01:00.918169 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 30 17:01:00 crc kubenswrapper[4875]: I0130 17:01:00.965300 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.036376 4875 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.094100 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.150094 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.257153 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.285366 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.322863 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.385918 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.390433 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.490286 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.500446 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.507322 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.688659 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.729708 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.753554 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.757775 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.769632 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.874335 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.889051 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 30 17:01:01 crc kubenswrapper[4875]: I0130 17:01:01.966021 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 30 17:01:02 crc kubenswrapper[4875]: I0130 17:01:02.022727 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 30 17:01:02 crc kubenswrapper[4875]: I0130 17:01:02.144207 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a764d0e3-2762-4d13-b92e-30e68c104bf6" path="/var/lib/kubelet/pods/a764d0e3-2762-4d13-b92e-30e68c104bf6/volumes"
Jan 30 17:01:02 crc kubenswrapper[4875]: I0130 17:01:02.227639 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 30 17:01:02 crc kubenswrapper[4875]: I0130 17:01:02.285041 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 30 17:01:02 crc kubenswrapper[4875]: I0130 17:01:02.353732 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 30 17:01:02 crc kubenswrapper[4875]: I0130 17:01:02.355380 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 30 17:01:02 crc kubenswrapper[4875]: I0130 17:01:02.428153 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 30 17:01:02 crc kubenswrapper[4875]: I0130 17:01:02.667442 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 30 17:01:02 crc kubenswrapper[4875]: I0130 17:01:02.709240 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 30 17:01:02 crc kubenswrapper[4875]: E0130 17:01:02.796660 4875 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd957892e_e8ab_4817_8690_7cb2613af5af.slice\": RecentStats: unable to find data in memory cache]"
Jan 30 17:01:02 crc kubenswrapper[4875]: I0130 17:01:02.858535 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 30 17:01:02 crc kubenswrapper[4875]: I0130 17:01:02.887789 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 30 17:01:02 crc kubenswrapper[4875]: I0130 17:01:02.892695 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 30 17:01:02 crc kubenswrapper[4875]: I0130 17:01:02.951740 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 30 17:01:02 crc kubenswrapper[4875]: I0130 17:01:02.970488 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 30 17:01:03 crc kubenswrapper[4875]: I0130 17:01:03.156674 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 30 17:01:03 crc kubenswrapper[4875]: I0130 17:01:03.343471 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 30 17:01:03 crc kubenswrapper[4875]: I0130 17:01:03.355017 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 17:01:03 crc kubenswrapper[4875]: I0130 17:01:03.355266 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 30 17:01:03 crc kubenswrapper[4875]: I0130 17:01:03.547712 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 30 17:01:03 crc kubenswrapper[4875]: I0130 17:01:03.610762 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 30 17:01:03 crc kubenswrapper[4875]: I0130 17:01:03.673691 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 17:01:03 crc kubenswrapper[4875]: I0130 17:01:03.750230 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 30 17:01:03 crc kubenswrapper[4875]: I0130 17:01:03.980892 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.012247 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.021108 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.067232 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.144702 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.167941 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.168328 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.229189 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.234736 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.242116 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.278630 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.282898 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.338451 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.358946 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.386504 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.391065 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.585946 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.617559 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.626749 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.635478 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.649692 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.779857 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.819107 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.822997 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-79cb59f449-xskcq"]
Jan 30 17:01:04 crc kubenswrapper[4875]: E0130 17:01:04.823170 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a764d0e3-2762-4d13-b92e-30e68c104bf6" containerName="oauth-openshift"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.823185 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="a764d0e3-2762-4d13-b92e-30e68c104bf6" containerName="oauth-openshift"
Jan 30 17:01:04 crc kubenswrapper[4875]: E0130 17:01:04.823215 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" containerName="installer"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.823222 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" containerName="installer"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.823307 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="a764d0e3-2762-4d13-b92e-30e68c104bf6" containerName="oauth-openshift"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.823315 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="d957892e-e8ab-4817-8690-7cb2613af5af" containerName="installer"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.823678 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.829709 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.829954 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.829989 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.830009 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.829832 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.830173 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.830209 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.830457 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.830815 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.831040 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.831051 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.831205 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.833797 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.839551 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79cb59f449-xskcq"]
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.840689 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.842117 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.846487 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.896171 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-session\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.896255 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.896290 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm5b6\" (UniqueName: \"kubernetes.io/projected/8a556d31-5835-404d-9df4-8678769fab13-kube-api-access-zm5b6\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.896318 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.896341 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a556d31-5835-404d-9df4-8678769fab13-audit-policies\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.896370 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.896399 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.896418 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-service-ca\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.896451 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.896517 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.896544 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-user-template-error\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.896645 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a556d31-5835-404d-9df4-8678769fab13-audit-dir\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.896684 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-user-template-login\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.896707 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-router-certs\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.925313 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.932649 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.933465 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.934761 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.962317 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.975872 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.979001 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.997129 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-session\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.997176 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.997195 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm5b6\" (UniqueName: \"kubernetes.io/projected/8a556d31-5835-404d-9df4-8678769fab13-kube-api-access-zm5b6\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.997213 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.997229 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a556d31-5835-404d-9df4-8678769fab13-audit-policies\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.997246 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.997265 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.997283 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-service-ca\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.997300 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.997329 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.997352 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-user-template-error\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.997388 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a556d31-5835-404d-9df4-8678769fab13-audit-dir\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.997419 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-user-template-login\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.997446 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-router-certs\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.997726 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a556d31-5835-404d-9df4-8678769fab13-audit-dir\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.998447 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a556d31-5835-404d-9df4-8678769fab13-audit-policies\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.998773 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-service-ca\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.998798 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:04 crc kubenswrapper[4875]: I0130 17:01:04.998990 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.002073 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.002386 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-session\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.002418 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-user-template-error\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.002563 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-router-certs\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.003189 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.003451 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.004872 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.006770 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a556d31-5835-404d-9df4-8678769fab13-v4-0-config-user-template-login\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.017975 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm5b6\" (UniqueName: \"kubernetes.io/projected/8a556d31-5835-404d-9df4-8678769fab13-kube-api-access-zm5b6\") pod \"oauth-openshift-79cb59f449-xskcq\" (UID: \"8a556d31-5835-404d-9df4-8678769fab13\") " pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.037217 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.047874 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.151910 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.187036 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.271234 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.322169 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.343553 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.418838 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.541492 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.592747 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.602154 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79cb59f449-xskcq"]
Jan 30 17:01:05 crc kubenswrapper[4875]: W0130 17:01:05.607501 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a556d31_5835_404d_9df4_8678769fab13.slice/crio-3cafd8ae481f77fdd35fa550cff48bdf99228d4f3ce6c40c16e95f8cbe917d10 WatchSource:0}: Error finding container 3cafd8ae481f77fdd35fa550cff48bdf99228d4f3ce6c40c16e95f8cbe917d10: Status 404 returned error can't find the container with id 3cafd8ae481f77fdd35fa550cff48bdf99228d4f3ce6c40c16e95f8cbe917d10
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.751944 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.851180 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.860352 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.886977 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.934803 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 30 17:01:05 crc kubenswrapper[4875]: I0130 17:01:05.980646 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.008153 4875 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.082862 4875 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.240196 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.291742 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.317617 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.351895 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq" event={"ID":"8a556d31-5835-404d-9df4-8678769fab13","Type":"ContainerStarted","Data":"34b6f4894b1578477e98d0b44e7b9d242af3759088b2ea22f4550798ddbf47fd"}
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.352365 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.352457 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq" event={"ID":"8a556d31-5835-404d-9df4-8678769fab13","Type":"ContainerStarted","Data":"3cafd8ae481f77fdd35fa550cff48bdf99228d4f3ce6c40c16e95f8cbe917d10"}
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.376886 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq" podStartSLOduration=54.376865124 podStartE2EDuration="54.376865124s" podCreationTimestamp="2026-01-30 17:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:01:06.374335957 +0000 UTC m=+276.921699340" watchObservedRunningTime="2026-01-30 17:01:06.376865124 +0000 UTC m=+276.924228517"
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.390260 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.521986 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.620737 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79cb59f449-xskcq"
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.737230 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.806610 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.880688 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.959484 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.975878 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.975934 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 30 17:01:06 crc kubenswrapper[4875]: I0130 17:01:06.987696 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.055165 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.069198 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.093458 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.183609 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.246651 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.278981 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.514490 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.604715 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.673562 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.732776 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.758825 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.814342 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.872265 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.962036 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.962241 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.979774 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.990967 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 17:01:07 crc kubenswrapper[4875]: I0130 17:01:07.994089 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 30 17:01:08 crc kubenswrapper[4875]: I0130 17:01:08.003377 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 30 17:01:08 crc kubenswrapper[4875]: I0130 17:01:08.008575 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 30 17:01:08 crc kubenswrapper[4875]: I0130 17:01:08.086265 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 30 17:01:08 crc kubenswrapper[4875]: I0130 17:01:08.147222 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 30 17:01:08 crc kubenswrapper[4875]: I0130 17:01:08.190765 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 30 17:01:08 crc kubenswrapper[4875]: I0130 17:01:08.215984 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 30 17:01:08 crc kubenswrapper[4875]: I0130 17:01:08.281916 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 30 17:01:08 crc kubenswrapper[4875]: I0130 17:01:08.328833 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 30 17:01:08 crc kubenswrapper[4875]: I0130 17:01:08.370889 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 30 17:01:08 crc kubenswrapper[4875]: I0130 17:01:08.396632 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 30 17:01:08 crc kubenswrapper[4875]: I0130 17:01:08.520453 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 30 17:01:08 crc kubenswrapper[4875]: I0130 17:01:08.566137 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 30 17:01:08 crc kubenswrapper[4875]: I0130 17:01:08.654845 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 30 17:01:08 crc kubenswrapper[4875]: I0130 17:01:08.678227 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 30 17:01:08 crc kubenswrapper[4875]: I0130 17:01:08.908018 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.069910 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.104213 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.138890 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.303474 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.314779 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.489779 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.506897 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.522772 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.536036 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.573545 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.602723 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.604736 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.666919 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.705612 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.708306 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.806102 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.824490 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.907277 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.909752 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 30 17:01:09 crc kubenswrapper[4875]: I0130 17:01:09.979800 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.130203 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.141532 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.166269 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.191041 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.272908 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.273781 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.347503 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.393869 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.428675 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.466672 4875 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.494350 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.546770 4875 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.547049 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://480b1c7ca112d5042c88140a13f4b797cbf983e2f4553a36846136dfb5953c9c" gracePeriod=5
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.554101 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.583956 4875 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.603063 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.707443 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.828286 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.869300 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 30 17:01:10 crc kubenswrapper[4875]: I0130 17:01:10.873357 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 30 17:01:11 crc kubenswrapper[4875]: I0130 17:01:11.015170 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 30 17:01:11 crc kubenswrapper[4875]: I0130 17:01:11.161967 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 30 17:01:11 crc kubenswrapper[4875]: I0130 17:01:11.309970 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 30 17:01:11 crc kubenswrapper[4875]: I0130 17:01:11.483986 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 17:01:11 crc kubenswrapper[4875]: I0130 17:01:11.519416 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 30 17:01:11 crc kubenswrapper[4875]: I0130 17:01:11.552867 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 30 17:01:11 crc kubenswrapper[4875]: I0130 17:01:11.554116 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 30 17:01:11 crc kubenswrapper[4875]: I0130 17:01:11.592664 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 30 17:01:11 crc kubenswrapper[4875]: I0130 17:01:11.609937 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 30 17:01:11 crc kubenswrapper[4875]: I0130 17:01:11.708825 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 30 17:01:11 crc kubenswrapper[4875]: I0130 17:01:11.774833 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 30 17:01:11 crc kubenswrapper[4875]: I0130 17:01:11.841144 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 30 17:01:11 crc kubenswrapper[4875]: I0130 17:01:11.876636 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 30 17:01:11 crc kubenswrapper[4875]: I0130 17:01:11.934488 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 30 17:01:11 crc kubenswrapper[4875]: I0130 17:01:11.938517 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 17:01:11 crc kubenswrapper[4875]: I0130 17:01:11.957846 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 30 17:01:12 crc kubenswrapper[4875]: I0130 17:01:12.176543 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 30 17:01:12 crc kubenswrapper[4875]: I0130 17:01:12.307690 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 30 17:01:12 crc kubenswrapper[4875]: I0130 17:01:12.316168 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 30 17:01:12 crc kubenswrapper[4875]: I0130 17:01:12.384749 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 30 17:01:12 crc kubenswrapper[4875]: I0130 17:01:12.476141 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 30 17:01:12 crc kubenswrapper[4875]: I0130 17:01:12.489309 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 30 17:01:12 crc kubenswrapper[4875]: I0130 17:01:12.499338 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 30 17:01:12 crc kubenswrapper[4875]: I0130 17:01:12.519755 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 30 17:01:12 crc kubenswrapper[4875]: I0130 17:01:12.789367 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 30 17:01:12 crc kubenswrapper[4875]: I0130 17:01:12.859673 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 30 17:01:12 crc kubenswrapper[4875]: I0130 17:01:12.892011 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 30 17:01:12 crc kubenswrapper[4875]: E0130 17:01:12.908306 4875 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd957892e_e8ab_4817_8690_7cb2613af5af.slice\": RecentStats: unable to find data in memory cache]"
Jan 30 17:01:13 crc kubenswrapper[4875]: I0130 17:01:13.227303 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 30 17:01:13 crc kubenswrapper[4875]: I0130 17:01:13.348364 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 30 17:01:13 crc kubenswrapper[4875]: I0130 17:01:13.423119 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 30 17:01:13 crc kubenswrapper[4875]: I0130 17:01:13.631131 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 30 17:01:13 crc kubenswrapper[4875]: I0130 17:01:13.860491 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 30 17:01:13 crc kubenswrapper[4875]: I0130 17:01:13.967762 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 30 17:01:13 crc kubenswrapper[4875]: I0130 17:01:13.973625 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 17:01:13 crc kubenswrapper[4875]: I0130 17:01:13.973701 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 30 17:01:13 crc kubenswrapper[4875]: I0130 17:01:13.975848 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 30 17:01:13 crc kubenswrapper[4875]: I0130 17:01:13.996316 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 30 17:01:14 crc kubenswrapper[4875]: I0130 17:01:14.008440 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 30 17:01:14 crc kubenswrapper[4875]: I0130 17:01:14.020530 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 30 17:01:14 crc kubenswrapper[4875]: I0130 17:01:14.105951 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 30 17:01:14 crc kubenswrapper[4875]: I0130 17:01:14.212747 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 17:01:14 crc kubenswrapper[4875]: I0130 17:01:14.353930 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 30 17:01:14 crc kubenswrapper[4875]: I0130 17:01:14.433192 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 30 17:01:14 crc kubenswrapper[4875]: I0130 17:01:14.633967 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 30 17:01:15 crc kubenswrapper[4875]: I0130 17:01:15.223566 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.154191 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.154262 4875 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.154796 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.154851 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.154963 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.154977 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.155040 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.155083 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.155036 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.155072 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.155213 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.155392 4875 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.155410 4875 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.155421 4875 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.155433 4875 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.163841 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.256289 4875 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.403753 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.403803 4875 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="480b1c7ca112d5042c88140a13f4b797cbf983e2f4553a36846136dfb5953c9c" exitCode=137
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.403841 4875 scope.go:117] "RemoveContainer" containerID="480b1c7ca112d5042c88140a13f4b797cbf983e2f4553a36846136dfb5953c9c"
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.403886 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.423070 4875 scope.go:117] "RemoveContainer" containerID="480b1c7ca112d5042c88140a13f4b797cbf983e2f4553a36846136dfb5953c9c"
Jan 30 17:01:16 crc kubenswrapper[4875]: E0130 17:01:16.423648 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"480b1c7ca112d5042c88140a13f4b797cbf983e2f4553a36846136dfb5953c9c\": container with ID starting with 480b1c7ca112d5042c88140a13f4b797cbf983e2f4553a36846136dfb5953c9c not found: ID does not exist" containerID="480b1c7ca112d5042c88140a13f4b797cbf983e2f4553a36846136dfb5953c9c"
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.423696 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"480b1c7ca112d5042c88140a13f4b797cbf983e2f4553a36846136dfb5953c9c"} err="failed to get container status \"480b1c7ca112d5042c88140a13f4b797cbf983e2f4553a36846136dfb5953c9c\": rpc error: code = NotFound desc = could not find container \"480b1c7ca112d5042c88140a13f4b797cbf983e2f4553a36846136dfb5953c9c\": container with ID starting with 480b1c7ca112d5042c88140a13f4b797cbf983e2f4553a36846136dfb5953c9c not found: ID does not exist"
Jan 30 17:01:16 crc kubenswrapper[4875]: I0130 17:01:16.834931 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 30 17:01:18 crc kubenswrapper[4875]: I0130 17:01:18.141463 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 30 17:01:18 crc kubenswrapper[4875]: I0130 17:01:18.141715 4875 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 30 17:01:18 crc kubenswrapper[4875]: I0130 17:01:18.149968 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 30 17:01:18 crc kubenswrapper[4875]: I0130 17:01:18.150001 4875 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c12438cb-8cd1-470b-a7c8-1309da37bffd"
Jan 30 17:01:18 crc kubenswrapper[4875]: I0130 17:01:18.152858 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 30 17:01:18 crc kubenswrapper[4875]: I0130 17:01:18.152901 4875 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c12438cb-8cd1-470b-a7c8-1309da37bffd"
Jan 30 17:01:23 crc kubenswrapper[4875]: E0130 17:01:23.015533 4875 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podd957892e_e8ab_4817_8690_7cb2613af5af.slice\": RecentStats: unable to find data in memory cache]"
Jan 30 17:01:29 crc kubenswrapper[4875]: I0130 17:01:29.936522 4875 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.090311 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gzr74"]
Jan 30 17:01:55 crc kubenswrapper[4875]: E0130 17:01:55.091022 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.091034 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.091119 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.091442 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.100438 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gzr74"]
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.276685 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a4796107-c803-4e07-8050-9398b0c2c929-registry-certificates\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.276735 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a4796107-c803-4e07-8050-9398b0c2c929-trusted-ca\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.276756 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a4796107-c803-4e07-8050-9398b0c2c929-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.276884 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4796107-c803-4e07-8050-9398b0c2c929-bound-sa-token\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.276946 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a4796107-c803-4e07-8050-9398b0c2c929-registry-tls\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.277066 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc8kz\" (UniqueName: \"kubernetes.io/projected/a4796107-c803-4e07-8050-9398b0c2c929-kube-api-access-mc8kz\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.277109 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.277204 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a4796107-c803-4e07-8050-9398b0c2c929-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.296286 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.378615 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc8kz\" (UniqueName: \"kubernetes.io/projected/a4796107-c803-4e07-8050-9398b0c2c929-kube-api-access-mc8kz\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.378675 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a4796107-c803-4e07-8050-9398b0c2c929-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.378718 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a4796107-c803-4e07-8050-9398b0c2c929-registry-certificates\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.378743 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a4796107-c803-4e07-8050-9398b0c2c929-trusted-ca\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.378764 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a4796107-c803-4e07-8050-9398b0c2c929-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.378813 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4796107-c803-4e07-8050-9398b0c2c929-bound-sa-token\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.378833 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a4796107-c803-4e07-8050-9398b0c2c929-registry-tls\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.379786 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a4796107-c803-4e07-8050-9398b0c2c929-trusted-ca\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.379959 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a4796107-c803-4e07-8050-9398b0c2c929-ca-trust-extracted\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.380866 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a4796107-c803-4e07-8050-9398b0c2c929-registry-certificates\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.385666 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a4796107-c803-4e07-8050-9398b0c2c929-installation-pull-secrets\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.386260 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a4796107-c803-4e07-8050-9398b0c2c929-registry-tls\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.395113 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc8kz\" (UniqueName: \"kubernetes.io/projected/a4796107-c803-4e07-8050-9398b0c2c929-kube-api-access-mc8kz\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.397028 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4796107-c803-4e07-8050-9398b0c2c929-bound-sa-token\") pod \"image-registry-66df7c8f76-gzr74\" (UID: \"a4796107-c803-4e07-8050-9398b0c2c929\") " pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.417067 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:55 crc kubenswrapper[4875]: I0130 17:01:55.806728 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-gzr74"]
Jan 30 17:01:56 crc kubenswrapper[4875]: I0130 17:01:56.606068 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gzr74" event={"ID":"a4796107-c803-4e07-8050-9398b0c2c929","Type":"ContainerStarted","Data":"0622dec0e2b3a372a12f888201674f4c2a7d5353093744310f2f9b2e81b615ae"}
Jan 30 17:01:56 crc kubenswrapper[4875]: I0130 17:01:56.606380 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-gzr74" event={"ID":"a4796107-c803-4e07-8050-9398b0c2c929","Type":"ContainerStarted","Data":"ed7459b228e5013ef202150a10c2c5e9ed1d86e8fd89291d0693c88780f17f55"}
Jan 30 17:01:56 crc kubenswrapper[4875]: I0130 17:01:56.606397 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:01:56 crc kubenswrapper[4875]: I0130 17:01:56.621697 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-gzr74" podStartSLOduration=1.621673529 podStartE2EDuration="1.621673529s" podCreationTimestamp="2026-01-30 17:01:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:01:56.620172452 +0000 UTC m=+327.167535845" watchObservedRunningTime="2026-01-30 17:01:56.621673529 +0000 UTC m=+327.169036942"
Jan 30 17:02:15 crc kubenswrapper[4875]: I0130 17:02:15.424396 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-gzr74"
Jan 30 17:02:15 crc kubenswrapper[4875]: I0130 17:02:15.482556 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vcs72"]
Jan 30 17:02:26 crc kubenswrapper[4875]: I0130 17:02:26.287046 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:02:26 crc kubenswrapper[4875]: I0130 17:02:26.288501 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:02:40 crc kubenswrapper[4875]: I0130 17:02:40.528246 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" podUID="f681b0b0-d68c-44b4-816e-86756d55542c" containerName="registry" containerID="cri-o://792d48544d7c1edfa8852669485026dce813c7f9eab1af517b44bd593a4b6983" gracePeriod=30
Jan 30 17:02:40 crc kubenswrapper[4875]: I0130 17:02:40.837676 4875 generic.go:334] "Generic (PLEG): container finished" podID="f681b0b0-d68c-44b4-816e-86756d55542c" containerID="792d48544d7c1edfa8852669485026dce813c7f9eab1af517b44bd593a4b6983" exitCode=0
Jan 30 17:02:40 crc kubenswrapper[4875]: I0130 17:02:40.837736 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" event={"ID":"f681b0b0-d68c-44b4-816e-86756d55542c","Type":"ContainerDied","Data":"792d48544d7c1edfa8852669485026dce813c7f9eab1af517b44bd593a4b6983"}
Jan 30 17:02:40 crc kubenswrapper[4875]: I0130 17:02:40.875423 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.030739 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-registry-tls\") pod \"f681b0b0-d68c-44b4-816e-86756d55542c\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") "
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.030795 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f681b0b0-d68c-44b4-816e-86756d55542c-trusted-ca\") pod \"f681b0b0-d68c-44b4-816e-86756d55542c\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") "
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.030927 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f681b0b0-d68c-44b4-816e-86756d55542c-registry-certificates\") pod \"f681b0b0-d68c-44b4-816e-86756d55542c\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") "
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.030960 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-bound-sa-token\") pod \"f681b0b0-d68c-44b4-816e-86756d55542c\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") "
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.031123 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"f681b0b0-d68c-44b4-816e-86756d55542c\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") "
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.031175 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz2mb\" (UniqueName: \"kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-kube-api-access-qz2mb\") pod \"f681b0b0-d68c-44b4-816e-86756d55542c\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") "
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.031205 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f681b0b0-d68c-44b4-816e-86756d55542c-ca-trust-extracted\") pod \"f681b0b0-d68c-44b4-816e-86756d55542c\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") "
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.031245 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f681b0b0-d68c-44b4-816e-86756d55542c-installation-pull-secrets\") pod \"f681b0b0-d68c-44b4-816e-86756d55542c\" (UID: \"f681b0b0-d68c-44b4-816e-86756d55542c\") "
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.031907 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f681b0b0-d68c-44b4-816e-86756d55542c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f681b0b0-d68c-44b4-816e-86756d55542c" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.031950 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f681b0b0-d68c-44b4-816e-86756d55542c-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f681b0b0-d68c-44b4-816e-86756d55542c" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.036820 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f681b0b0-d68c-44b4-816e-86756d55542c" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.038278 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f681b0b0-d68c-44b4-816e-86756d55542c-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f681b0b0-d68c-44b4-816e-86756d55542c" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.040979 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-kube-api-access-qz2mb" (OuterVolumeSpecName: "kube-api-access-qz2mb") pod "f681b0b0-d68c-44b4-816e-86756d55542c" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c"). InnerVolumeSpecName "kube-api-access-qz2mb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.041096 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f681b0b0-d68c-44b4-816e-86756d55542c" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.042798 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "f681b0b0-d68c-44b4-816e-86756d55542c" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.062964 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f681b0b0-d68c-44b4-816e-86756d55542c-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f681b0b0-d68c-44b4-816e-86756d55542c" (UID: "f681b0b0-d68c-44b4-816e-86756d55542c"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.133347 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz2mb\" (UniqueName: \"kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-kube-api-access-qz2mb\") on node \"crc\" DevicePath \"\""
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.133403 4875 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f681b0b0-d68c-44b4-816e-86756d55542c-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.133421 4875 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f681b0b0-d68c-44b4-816e-86756d55542c-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.133440 4875 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.133458 4875 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f681b0b0-d68c-44b4-816e-86756d55542c-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.133474 4875 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f681b0b0-d68c-44b4-816e-86756d55542c-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.133491 4875 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f681b0b0-d68c-44b4-816e-86756d55542c-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.846157 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vcs72" event={"ID":"f681b0b0-d68c-44b4-816e-86756d55542c","Type":"ContainerDied","Data":"c0c6e0139fc65723bc53a0f18ad9bea6d6cf90a56b6b3727432006100bdfae67"}
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.846493 4875 scope.go:117] "RemoveContainer" containerID="792d48544d7c1edfa8852669485026dce813c7f9eab1af517b44bd593a4b6983"
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.846752 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vcs72"
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.882819 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vcs72"]
Jan 30 17:02:41 crc kubenswrapper[4875]: I0130 17:02:41.886161 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vcs72"]
Jan 30 17:02:42 crc kubenswrapper[4875]: I0130 17:02:42.142578 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f681b0b0-d68c-44b4-816e-86756d55542c" path="/var/lib/kubelet/pods/f681b0b0-d68c-44b4-816e-86756d55542c/volumes"
Jan 30 17:02:56 crc kubenswrapper[4875]: I0130 17:02:56.288118 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:02:56 crc kubenswrapper[4875]: I0130 17:02:56.288901 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:03:26 crc kubenswrapper[4875]: I0130 17:03:26.286827 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:03:26 crc kubenswrapper[4875]: I0130 17:03:26.287368 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:03:26 crc kubenswrapper[4875]: I0130 17:03:26.287415 4875 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn"
Jan 30 17:03:26 crc kubenswrapper[4875]: I0130 17:03:26.287957 4875 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"12371742fd50f0efbcda52c6975077df5a1e419df1f9382a50ead1f6472b0960"} pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 17:03:26 crc kubenswrapper[4875]: I0130 17:03:26.288012 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" containerID="cri-o://12371742fd50f0efbcda52c6975077df5a1e419df1f9382a50ead1f6472b0960" gracePeriod=600
Jan 30 17:03:27 crc kubenswrapper[4875]: I0130 17:03:27.330866 4875 generic.go:334] "Generic (PLEG): container finished" podID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerID="12371742fd50f0efbcda52c6975077df5a1e419df1f9382a50ead1f6472b0960" exitCode=0
Jan 30 17:03:27 crc kubenswrapper[4875]: I0130 17:03:27.330922 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerDied","Data":"12371742fd50f0efbcda52c6975077df5a1e419df1f9382a50ead1f6472b0960"}
Jan 30 17:03:27 crc kubenswrapper[4875]: I0130 17:03:27.331279 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerStarted","Data":"ea4fc173ca1c7737282f76b497b93072de498c51c422171abc059436c0e39c75"}
Jan 30 17:03:27 crc kubenswrapper[4875]: I0130 17:03:27.331297 4875 scope.go:117] "RemoveContainer" containerID="5e9e8a7430cc446fc690bf5cab0c7399ff48a4d2e9d4492c448ea520f6270c69"
Jan 30 17:05:26 crc kubenswrapper[4875]: I0130 17:05:26.287840 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:05:26 crc kubenswrapper[4875]: I0130 17:05:26.288352 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:05:50 crc kubenswrapper[4875]: I0130 17:05:50.173858 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb"]
Jan 30 17:05:50 crc kubenswrapper[4875]: E0130 17:05:50.174615 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f681b0b0-d68c-44b4-816e-86756d55542c" containerName="registry"
Jan 30 17:05:50 crc kubenswrapper[4875]: I0130 17:05:50.174630 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f681b0b0-d68c-44b4-816e-86756d55542c" containerName="registry"
Jan 30 17:05:50 crc kubenswrapper[4875]: I0130 17:05:50.174761 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="f681b0b0-d68c-44b4-816e-86756d55542c" containerName="registry"
Jan 30 17:05:50 crc kubenswrapper[4875]: I0130 17:05:50.175612 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb"
Jan 30 17:05:50 crc kubenswrapper[4875]: I0130 17:05:50.178004 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 30 17:05:50 crc kubenswrapper[4875]: I0130 17:05:50.182762 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb"]
Jan 30 17:05:50 crc kubenswrapper[4875]: I0130 17:05:50.273058 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0323f50d-c1fd-466c-ab03-020895b83c84-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb\" (UID: \"0323f50d-c1fd-466c-ab03-020895b83c84\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb"
Jan 30 17:05:50 crc kubenswrapper[4875]: I0130 17:05:50.273230 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0323f50d-c1fd-466c-ab03-020895b83c84-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb\" (UID: \"0323f50d-c1fd-466c-ab03-020895b83c84\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb"
Jan 30 17:05:50 crc kubenswrapper[4875]: I0130 17:05:50.273268 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5wcg\" (UniqueName: \"kubernetes.io/projected/0323f50d-c1fd-466c-ab03-020895b83c84-kube-api-access-l5wcg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb\" (UID: \"0323f50d-c1fd-466c-ab03-020895b83c84\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb"
Jan 30 17:05:50 crc kubenswrapper[4875]: I0130 17:05:50.374105 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0323f50d-c1fd-466c-ab03-020895b83c84-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb\" (UID: \"0323f50d-c1fd-466c-ab03-020895b83c84\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb"
Jan 30 17:05:50 crc kubenswrapper[4875]: I0130 17:05:50.374154 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5wcg\" (UniqueName: \"kubernetes.io/projected/0323f50d-c1fd-466c-ab03-020895b83c84-kube-api-access-l5wcg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb\" (UID: \"0323f50d-c1fd-466c-ab03-020895b83c84\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb"
Jan 30 17:05:50 crc kubenswrapper[4875]: I0130 17:05:50.374183 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0323f50d-c1fd-466c-ab03-020895b83c84-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb\" (UID: \"0323f50d-c1fd-466c-ab03-020895b83c84\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb"
Jan 30 17:05:50 crc kubenswrapper[4875]: I0130 17:05:50.374630 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0323f50d-c1fd-466c-ab03-020895b83c84-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb\" (UID: \"0323f50d-c1fd-466c-ab03-020895b83c84\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb"
Jan 30 17:05:50 crc kubenswrapper[4875]: I0130 17:05:50.374839 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0323f50d-c1fd-466c-ab03-020895b83c84-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb\" (UID: \"0323f50d-c1fd-466c-ab03-020895b83c84\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb"
Jan 30 17:05:50 crc kubenswrapper[4875]: I0130 17:05:50.405511 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5wcg\" (UniqueName: \"kubernetes.io/projected/0323f50d-c1fd-466c-ab03-020895b83c84-kube-api-access-l5wcg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb\" (UID: \"0323f50d-c1fd-466c-ab03-020895b83c84\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb"
Jan 30 17:05:50 crc kubenswrapper[4875]: I0130 17:05:50.495431 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb"
Jan 30 17:05:50 crc kubenswrapper[4875]: I0130 17:05:50.889199 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb"]
Jan 30 17:05:51 crc kubenswrapper[4875]: I0130 17:05:51.079627 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb" event={"ID":"0323f50d-c1fd-466c-ab03-020895b83c84","Type":"ContainerStarted","Data":"bb67416b07a54c3214a02c459e84480aef59526fe342c2c2859e2d79be5b6c28"}
Jan 30 17:05:51 crc kubenswrapper[4875]: I0130 17:05:51.079946 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb" event={"ID":"0323f50d-c1fd-466c-ab03-020895b83c84","Type":"ContainerStarted","Data":"1760df00ecdd92db25d822d4fe62cc2434a7f48e2d0caeff43e1523209f28d05"}
Jan 30 17:05:52 crc kubenswrapper[4875]: I0130 17:05:52.087376 4875 generic.go:334] "Generic (PLEG): container finished" podID="0323f50d-c1fd-466c-ab03-020895b83c84" containerID="bb67416b07a54c3214a02c459e84480aef59526fe342c2c2859e2d79be5b6c28" exitCode=0
Jan 30 17:05:52 crc kubenswrapper[4875]: I0130 17:05:52.087425 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb" event={"ID":"0323f50d-c1fd-466c-ab03-020895b83c84","Type":"ContainerDied","Data":"bb67416b07a54c3214a02c459e84480aef59526fe342c2c2859e2d79be5b6c28"}
Jan 30 17:05:52 crc kubenswrapper[4875]: I0130 17:05:52.088691 4875 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 17:05:54 crc kubenswrapper[4875]: I0130 17:05:54.099981 4875 generic.go:334] "Generic (PLEG): container finished" podID="0323f50d-c1fd-466c-ab03-020895b83c84" containerID="0a0cf3df56d36d14c04bce8aa93e1be9fd44e06ee65685bc700a76eca17b5a65" exitCode=0
Jan 30 17:05:54 crc kubenswrapper[4875]: I0130 17:05:54.100433 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb" event={"ID":"0323f50d-c1fd-466c-ab03-020895b83c84","Type":"ContainerDied","Data":"0a0cf3df56d36d14c04bce8aa93e1be9fd44e06ee65685bc700a76eca17b5a65"}
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb" event={"ID":"0323f50d-c1fd-466c-ab03-020895b83c84","Type":"ContainerDied","Data":"0a0cf3df56d36d14c04bce8aa93e1be9fd44e06ee65685bc700a76eca17b5a65"} Jan 30 17:05:55 crc kubenswrapper[4875]: I0130 17:05:55.108401 4875 generic.go:334] "Generic (PLEG): container finished" podID="0323f50d-c1fd-466c-ab03-020895b83c84" containerID="02a4219339a4fe1725d35093c542f8adc084d72eed25d56d11d21429260ae5c8" exitCode=0 Jan 30 17:05:55 crc kubenswrapper[4875]: I0130 17:05:55.108444 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb" event={"ID":"0323f50d-c1fd-466c-ab03-020895b83c84","Type":"ContainerDied","Data":"02a4219339a4fe1725d35093c542f8adc084d72eed25d56d11d21429260ae5c8"} Jan 30 17:05:56 crc kubenswrapper[4875]: I0130 17:05:56.287450 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:05:56 crc kubenswrapper[4875]: I0130 17:05:56.287844 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:05:56 crc kubenswrapper[4875]: I0130 17:05:56.322882 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb" Jan 30 17:05:56 crc kubenswrapper[4875]: I0130 17:05:56.454727 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0323f50d-c1fd-466c-ab03-020895b83c84-util\") pod \"0323f50d-c1fd-466c-ab03-020895b83c84\" (UID: \"0323f50d-c1fd-466c-ab03-020895b83c84\") " Jan 30 17:05:56 crc kubenswrapper[4875]: I0130 17:05:56.454863 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0323f50d-c1fd-466c-ab03-020895b83c84-bundle\") pod \"0323f50d-c1fd-466c-ab03-020895b83c84\" (UID: \"0323f50d-c1fd-466c-ab03-020895b83c84\") " Jan 30 17:05:56 crc kubenswrapper[4875]: I0130 17:05:56.454887 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5wcg\" (UniqueName: \"kubernetes.io/projected/0323f50d-c1fd-466c-ab03-020895b83c84-kube-api-access-l5wcg\") pod \"0323f50d-c1fd-466c-ab03-020895b83c84\" (UID: \"0323f50d-c1fd-466c-ab03-020895b83c84\") " Jan 30 17:05:56 crc kubenswrapper[4875]: I0130 17:05:56.455336 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0323f50d-c1fd-466c-ab03-020895b83c84-bundle" (OuterVolumeSpecName: "bundle") pod "0323f50d-c1fd-466c-ab03-020895b83c84" (UID: "0323f50d-c1fd-466c-ab03-020895b83c84"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:05:56 crc kubenswrapper[4875]: I0130 17:05:56.460763 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0323f50d-c1fd-466c-ab03-020895b83c84-kube-api-access-l5wcg" (OuterVolumeSpecName: "kube-api-access-l5wcg") pod "0323f50d-c1fd-466c-ab03-020895b83c84" (UID: "0323f50d-c1fd-466c-ab03-020895b83c84"). InnerVolumeSpecName "kube-api-access-l5wcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:05:56 crc kubenswrapper[4875]: I0130 17:05:56.556183 4875 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0323f50d-c1fd-466c-ab03-020895b83c84-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:05:56 crc kubenswrapper[4875]: I0130 17:05:56.556236 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5wcg\" (UniqueName: \"kubernetes.io/projected/0323f50d-c1fd-466c-ab03-020895b83c84-kube-api-access-l5wcg\") on node \"crc\" DevicePath \"\"" Jan 30 17:05:56 crc kubenswrapper[4875]: I0130 17:05:56.777513 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0323f50d-c1fd-466c-ab03-020895b83c84-util" (OuterVolumeSpecName: "util") pod "0323f50d-c1fd-466c-ab03-020895b83c84" (UID: "0323f50d-c1fd-466c-ab03-020895b83c84"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:05:56 crc kubenswrapper[4875]: I0130 17:05:56.861151 4875 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0323f50d-c1fd-466c-ab03-020895b83c84-util\") on node \"crc\" DevicePath \"\"" Jan 30 17:05:57 crc kubenswrapper[4875]: I0130 17:05:57.121391 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb" event={"ID":"0323f50d-c1fd-466c-ab03-020895b83c84","Type":"ContainerDied","Data":"1760df00ecdd92db25d822d4fe62cc2434a7f48e2d0caeff43e1523209f28d05"} Jan 30 17:05:57 crc kubenswrapper[4875]: I0130 17:05:57.121654 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1760df00ecdd92db25d822d4fe62cc2434a7f48e2d0caeff43e1523209f28d05" Jan 30 17:05:57 crc kubenswrapper[4875]: I0130 17:05:57.121453 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb" Jan 30 17:06:00 crc kubenswrapper[4875]: I0130 17:06:00.227558 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mps6c"] Jan 30 17:06:00 crc kubenswrapper[4875]: I0130 17:06:00.228009 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovn-controller" containerID="cri-o://27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7" gracePeriod=30 Jan 30 17:06:00 crc kubenswrapper[4875]: I0130 17:06:00.228107 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88" gracePeriod=30 Jan 30 17:06:00 crc kubenswrapper[4875]: I0130 17:06:00.228163 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovn-acl-logging" containerID="cri-o://ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc" gracePeriod=30 Jan 30 17:06:00 crc kubenswrapper[4875]: I0130 17:06:00.228236 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="sbdb" containerID="cri-o://dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6" gracePeriod=30 Jan 30 17:06:00 crc kubenswrapper[4875]: I0130 17:06:00.228272 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="kube-rbac-proxy-node" containerID="cri-o://a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6" gracePeriod=30 Jan 30 17:06:00 crc kubenswrapper[4875]: I0130 17:06:00.228146 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="northd" containerID="cri-o://2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f" gracePeriod=30 Jan 30 17:06:00 crc kubenswrapper[4875]: I0130 17:06:00.228337 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="nbdb" containerID="cri-o://48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa" gracePeriod=30 Jan 30 17:06:00 crc kubenswrapper[4875]: I0130 17:06:00.272218 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovnkube-controller" containerID="cri-o://17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260" gracePeriod=30 Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.067180 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovnkube-controller/3.log" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.071452 4875 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovn-acl-logging/0.log" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.071921 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovn-controller/0.log" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.072319 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122207 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-24kzl"] Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.122412 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="nbdb" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122424 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="nbdb" Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.122433 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="sbdb" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122438 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="sbdb" Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.122448 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122454 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.122462 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="northd" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122467 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="northd" Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.122476 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0323f50d-c1fd-466c-ab03-020895b83c84" containerName="extract" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122481 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="0323f50d-c1fd-466c-ab03-020895b83c84" containerName="extract" Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.122510 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0323f50d-c1fd-466c-ab03-020895b83c84" containerName="pull" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122519 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="0323f50d-c1fd-466c-ab03-020895b83c84" containerName="pull" Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.122529 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovnkube-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122534 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovnkube-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.122542 4875 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovnkube-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122548 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovnkube-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.122556 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="kubecfg-setup" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122563 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="kubecfg-setup" Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.122571 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovn-acl-logging" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122576 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovn-acl-logging" Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.122602 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovn-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122609 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovn-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.122617 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0323f50d-c1fd-466c-ab03-020895b83c84" containerName="util" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122623 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="0323f50d-c1fd-466c-ab03-020895b83c84" containerName="util" Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.122633 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="kube-rbac-proxy-node" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122639 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="kube-rbac-proxy-node" Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.122648 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovnkube-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122654 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovnkube-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122744 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="kube-rbac-proxy-node" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122754 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovnkube-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122763 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovnkube-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122771 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="0323f50d-c1fd-466c-ab03-020895b83c84" containerName="extract" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122777 4875 
memory_manager.go:354] "RemoveStaleState removing state" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovn-acl-logging" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122785 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="sbdb" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122793 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovnkube-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122801 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovn-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122811 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="northd" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122819 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122836 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="nbdb" Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.122917 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovnkube-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122923 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovnkube-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.122930 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovnkube-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.122936 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovnkube-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.123037 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovnkube-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.123046 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerName="ovnkube-controller" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.124453 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.142274 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ck4hq_562b7bc8-0631-497c-9b8a-05af82dcfff9/kube-multus/2.log" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.142812 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ck4hq_562b7bc8-0631-497c-9b8a-05af82dcfff9/kube-multus/1.log" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.142855 4875 generic.go:334] "Generic (PLEG): container finished" podID="562b7bc8-0631-497c-9b8a-05af82dcfff9" containerID="62c943c842d51e922bb22248b6399f5410f8500f6276b2f741a1e5b35ad9a256" exitCode=2 Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.142908 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ck4hq" event={"ID":"562b7bc8-0631-497c-9b8a-05af82dcfff9","Type":"ContainerDied","Data":"62c943c842d51e922bb22248b6399f5410f8500f6276b2f741a1e5b35ad9a256"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.142943 4875 scope.go:117] "RemoveContainer" containerID="3b26a1f922e0214d976c84feb63e7ad8957d0d356ff5287eb78b1a6eaf4564ac" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.143336 4875 scope.go:117] "RemoveContainer" containerID="62c943c842d51e922bb22248b6399f5410f8500f6276b2f741a1e5b35ad9a256" Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.143515 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-ck4hq_openshift-multus(562b7bc8-0631-497c-9b8a-05af82dcfff9)\"" pod="openshift-multus/multus-ck4hq" podUID="562b7bc8-0631-497c-9b8a-05af82dcfff9" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.145695 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovnkube-controller/3.log" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.147825 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovn-acl-logging/0.log" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.149789 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mps6c_85cf29f6-017d-475a-b63c-cd1cab3c8132/ovn-controller/0.log" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150235 4875 generic.go:334] "Generic (PLEG): container finished" podID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerID="17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260" exitCode=0 Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150262 4875 generic.go:334] "Generic (PLEG): container finished" podID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerID="dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6" exitCode=0 Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150274 4875 generic.go:334] "Generic (PLEG): container finished" podID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerID="48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa" exitCode=0 Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150289 4875 generic.go:334] "Generic (PLEG): container finished" podID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerID="2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f" exitCode=0 Jan 30 
17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150297 4875 generic.go:334] "Generic (PLEG): container finished" podID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerID="2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88" exitCode=0 Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150305 4875 generic.go:334] "Generic (PLEG): container finished" podID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerID="a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6" exitCode=0 Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150313 4875 generic.go:334] "Generic (PLEG): container finished" podID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerID="ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc" exitCode=143 Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150322 4875 generic.go:334] "Generic (PLEG): container finished" podID="85cf29f6-017d-475a-b63c-cd1cab3c8132" containerID="27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7" exitCode=143 Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150318 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerDied","Data":"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150359 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerDied","Data":"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150371 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerDied","Data":"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150381 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerDied","Data":"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150392 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerDied","Data":"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150403 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerDied","Data":"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150331 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150416 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150427 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150435 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150440 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150446 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150451 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150456 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150461 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150466 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150472 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150507 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerDied","Data":"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150515 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150521 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150526 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150531 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150536 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150541 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150546 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150551 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150556 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150561 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150567 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerDied","Data":"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150577 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150604 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150610 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150616 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150622 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150628 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150633 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150638 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150644 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150649 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150657 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mps6c" event={"ID":"85cf29f6-017d-475a-b63c-cd1cab3c8132","Type":"ContainerDied","Data":"fb31988c8c373b3caffe3d25e35a9a4e043b0809bc35df330374eb0cf72cb0af"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150665 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150671 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150676 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150682 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150687 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150692 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150697 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150702 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150706 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.150712 4875 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918"} Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.176004 4875 scope.go:117] "RemoveContainer" containerID="17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.192303 4875 scope.go:117] "RemoveContainer" containerID="41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.213374 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-env-overrides\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.213612 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-kubelet\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.213704 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-run-ovn-kubernetes\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.213781 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbb6z\" (UniqueName: \"kubernetes.io/projected/85cf29f6-017d-475a-b63c-cd1cab3c8132-kube-api-access-fbb6z\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.213869 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-cni-bin\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214008 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovnkube-config\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.213800 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214095 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-slash\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.213832 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.213849 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.213952 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214198 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-systemd\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214254 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-cni-netd\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214290 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-var-lib-cni-networks-ovn-kubernetes\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214317 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-var-lib-openvswitch\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214339 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-systemd-units\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: 
\"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214375 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovn-node-metrics-cert\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214398 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovnkube-script-lib\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214427 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-run-netns\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214488 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-node-log\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214494 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214513 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-ovn\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214533 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-log-socket\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214570 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-openvswitch\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214613 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-etc-openvswitch\") pod \"85cf29f6-017d-475a-b63c-cd1cab3c8132\" (UID: \"85cf29f6-017d-475a-b63c-cd1cab3c8132\") " Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214804 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-ovnkube-config\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214861 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-log-socket\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214907 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214939 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-kubelet\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214958 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-slash\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 
17:06:01.215003 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-cni-netd\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.215028 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-run-netns\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.215052 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-node-log\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.215076 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-ovn-node-metrics-cert\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214540 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.215100 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-log-socket" (OuterVolumeSpecName: "log-socket") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.215142 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.215168 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.215222 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-systemd-units\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214822 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-slash" (OuterVolumeSpecName: "host-slash") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214558 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214576 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.214612 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.215017 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.215065 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.215389 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.215404 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-node-log" (OuterVolumeSpecName: "node-log") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.215555 4875 scope.go:117] "RemoveContainer" containerID="dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.215678 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-cni-bin\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.215769 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-ovnkube-script-lib\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.215850 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-run-ovn\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.215938 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-run-ovn-kubernetes\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.216020 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-run-systemd\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.216094 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7mkq\" (UniqueName: \"kubernetes.io/projected/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-kube-api-access-z7mkq\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.216169 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-var-lib-openvswitch\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 
17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.216241 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-etc-openvswitch\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.216345 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-run-openvswitch\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.216433 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-env-overrides\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.216546 4875 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-node-log\") on node \"crc\" DevicePath \"\""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.216654 4875 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.216722 4875 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-log-socket\") on node \"crc\" DevicePath \"\""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.216785 4875 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.216845 4875 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.216910 4875 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.216967 4875 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.217030 4875 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.217093 4875 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.217156 4875 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.217218 4875 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-slash\") on node \"crc\" DevicePath \"\""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.217283 4875 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.217349 4875 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.217421 4875 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.217491 4875 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.217559 4875 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.217657 4875 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.220913 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85cf29f6-017d-475a-b63c-cd1cab3c8132-kube-api-access-fbb6z" (OuterVolumeSpecName: "kube-api-access-fbb6z") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "kube-api-access-fbb6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.221184 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.231306 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "85cf29f6-017d-475a-b63c-cd1cab3c8132" (UID: "85cf29f6-017d-475a-b63c-cd1cab3c8132"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.232247 4875 scope.go:117] "RemoveContainer" containerID="48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.244160 4875 scope.go:117] "RemoveContainer" containerID="2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.256804 4875 scope.go:117] "RemoveContainer" containerID="2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.270364 4875 scope.go:117] "RemoveContainer" containerID="a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.281671 4875 scope.go:117] "RemoveContainer" containerID="ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.294248 4875 scope.go:117] "RemoveContainer" containerID="27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.314953 4875 scope.go:117] "RemoveContainer" containerID="0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.322684 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-cni-netd\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.322794 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-run-netns\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.322821 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-node-log\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.322844 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-ovn-node-metrics-cert\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.322895 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-systemd-units\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl"
(UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.322968 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-cni-bin\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.322991 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-ovnkube-script-lib\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.323020 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-run-ovn-kubernetes\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.323045 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-run-ovn\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.323075 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-run-systemd\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.323105 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7mkq\" (UniqueName: \"kubernetes.io/projected/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-kube-api-access-z7mkq\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.323171 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-var-lib-openvswitch\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.323199 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-etc-openvswitch\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.323251 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-run-openvswitch\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.323278 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-env-overrides\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.323312 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-ovnkube-config\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.323350 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-log-socket\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.323401 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-kubelet\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.323416 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-slash\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.323432 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.323522 4875 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/85cf29f6-017d-475a-b63c-cd1cab3c8132-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.323535 4875 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/85cf29f6-017d-475a-b63c-cd1cab3c8132-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.323551 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbb6z\" (UniqueName: \"kubernetes.io/projected/85cf29f6-017d-475a-b63c-cd1cab3c8132-kube-api-access-fbb6z\") on node \"crc\" DevicePath \"\"" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.324038 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-cni-netd\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.324083 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-run-netns\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.324117 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-node-log\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.324694 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-var-lib-openvswitch\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.324790 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-cni-bin\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.324796 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-run-ovn-kubernetes\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.324887 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-kubelet\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.324966 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-etc-openvswitch\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.324904 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-systemd-units\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.325134 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-log-socket\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.325151 4875 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-slash\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.325302 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.325381 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-run-systemd\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.325398 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-run-openvswitch\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.325405 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-ovnkube-config\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.325614 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-ovnkube-script-lib\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.325854 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-env-overrides\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.326165 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-run-ovn\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.329786 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-ovn-node-metrics-cert\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.332411 4875 scope.go:117] "RemoveContainer" containerID="17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260" Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.332844 
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.332900 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260"} err="failed to get container status \"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260\": rpc error: code = NotFound desc = could not find container \"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260\": container with ID starting with 17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.332933 4875 scope.go:117] "RemoveContainer" containerID="41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c"
Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.333396 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c\": container with ID starting with 41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c not found: ID does not exist" containerID="41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.333433 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c"} err="failed to get container status \"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c\": rpc error: code = NotFound desc = could not find container \"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c\": container with ID starting with 41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.333461 4875 scope.go:117] "RemoveContainer" containerID="dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6"
Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.333743 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\": container with ID starting with dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6 not found: ID does not exist" containerID="dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.333766 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6"} err="failed to get container status \"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\": rpc error: code = NotFound desc = could not find container \"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\": container with ID starting with dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.333781 4875 scope.go:117] "RemoveContainer" containerID="48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa"
Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.334012 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\": container with ID starting with 48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa not found: ID does not exist" containerID="48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.334040 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa"} err="failed to get container status \"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\": rpc error: code = NotFound desc = could not find container \"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\": container with ID starting with 48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.334059 4875 scope.go:117] "RemoveContainer" containerID="2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f"
Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.334505 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\": container with ID starting with 2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f not found: ID does not exist" containerID="2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.334530 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f"} err="failed to get container status \"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\": rpc error: code = NotFound desc = could not find container \"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\": container with ID starting with 2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.334546 4875 scope.go:117] "RemoveContainer" containerID="2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88"
Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.334843 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\": container with ID starting with 2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88 not found: ID does not exist" containerID="2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.334961 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88"} err="failed to get container status \"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\": rpc error: code = NotFound desc = could not find container \"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\": container with ID starting with 2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.335074 4875 scope.go:117] "RemoveContainer" containerID="a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6"
Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.339171 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\": container with ID starting with a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6 not found: ID does not exist" containerID="a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.339212 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6"} err="failed to get container status \"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\": rpc error: code = NotFound desc = could not find container \"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\": container with ID starting with a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.339237 4875 scope.go:117] "RemoveContainer" containerID="ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc"
Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.339615 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\": container with ID starting with ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc not found: ID does not exist" containerID="ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.339640 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc"} err="failed to get container status \"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\": rpc error: code = NotFound desc = could not find container \"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\": container with ID starting with ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.339654 4875 scope.go:117] "RemoveContainer" containerID="27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7"
Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.340493 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\": container with ID starting with 27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7 not found: ID does not exist" containerID="27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.340520 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7"} err="failed to get container status \"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\": rpc error: code = NotFound desc = could not find container \"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\": container with ID starting with 27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.340538 4875 scope.go:117] "RemoveContainer" containerID="0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918"
Jan 30 17:06:01 crc kubenswrapper[4875]: E0130 17:06:01.340967 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\": container with ID starting with 0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918 not found: ID does not exist" containerID="0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.340989 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918"} err="failed to get container status \"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\": rpc error: code = NotFound desc = could not find container \"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\": container with ID starting with 0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.341002 4875 scope.go:117] "RemoveContainer" containerID="17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.341160 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260"} err="failed to get container status \"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260\": rpc error: code = NotFound desc = could not find container \"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260\": container with ID starting with 17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.341173 4875 scope.go:117] "RemoveContainer" containerID="41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.341359 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c"} err="failed to get container status \"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c\": rpc error: code = NotFound desc = could not find container \"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c\": container with ID starting with 41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.341378 4875 scope.go:117] "RemoveContainer" containerID="dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.341542 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6"} err="failed to get container status \"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\": rpc error: code = NotFound desc = could not find container \"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\": container with ID starting with dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.341559 4875 scope.go:117] "RemoveContainer" containerID="48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.341798 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa"} err="failed to get container status \"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\": rpc error: code = NotFound desc = could not find container \"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\": container with ID starting with 48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.341815 4875 scope.go:117] "RemoveContainer" containerID="2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.341997 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f"} err="failed to get container status \"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\": rpc error: code = NotFound desc = could not find container \"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\": container with ID starting with 2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.342017 4875 scope.go:117] "RemoveContainer" containerID="2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.342204 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88"} err="failed to get container status \"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\": rpc error: code = NotFound desc = could not find container \"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\": container with ID starting with 2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.342223 4875 scope.go:117] "RemoveContainer" containerID="a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.342692 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6"} err="failed to get container status \"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\": rpc error: code = NotFound desc = could not find container \"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\": container with ID starting with a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.342711 4875 scope.go:117] "RemoveContainer" containerID="ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.342897 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc"} err="failed to get container status \"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\": rpc error: code = NotFound desc = could not find container \"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\": container with ID starting with ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.342914 4875 scope.go:117] "RemoveContainer" containerID="27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.343058 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7"} err="failed to get container status \"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\": rpc error: code = NotFound desc = could not find container \"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\": container with ID starting with 27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.343076 4875 scope.go:117] "RemoveContainer" containerID="0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.343197 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918"} err="failed to get container status \"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\": rpc error: code = NotFound desc = could not find container \"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\": container with ID starting with 0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.343216 4875 scope.go:117] "RemoveContainer" containerID="17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.343355 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260"} err="failed to get container status \"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260\": rpc error: code = NotFound desc = could not find container \"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260\": container with ID starting with 17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.343373 4875 scope.go:117] "RemoveContainer" containerID="41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.343503 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c"} err="failed to get container status \"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c\": rpc error: code = NotFound desc = could not find container \"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c\": container with ID starting with 41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.343518 4875 scope.go:117] "RemoveContainer" containerID="dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.343762 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6"} err="failed to get container status \"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\": rpc error: code = NotFound desc = could not find container \"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\": container with ID starting with dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.343866 4875 scope.go:117] "RemoveContainer" containerID="48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.344165 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa"} err="failed to get container status \"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\": rpc error: code = NotFound desc = could not find container \"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\": container with ID starting with 48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.344235 4875 scope.go:117] "RemoveContainer" containerID="2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.344410 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7mkq\" (UniqueName: \"kubernetes.io/projected/51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f-kube-api-access-z7mkq\") pod \"ovnkube-node-24kzl\" (UID: \"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f\") " pod="openshift-ovn-kubernetes/ovnkube-node-24kzl"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.344432 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f"} err="failed to get container status \"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\": rpc error: code = NotFound desc = could not find container \"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\": container with ID starting with 2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.344447 4875 scope.go:117] "RemoveContainer" containerID="2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.344607 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88"} err="failed to get container status \"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\": rpc error: code = NotFound desc = could not find container \"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\": container with ID starting with 2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.344623 4875 scope.go:117] "RemoveContainer" containerID="a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.344784 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6"} err="failed to get container status \"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\": rpc error: code = NotFound desc = could not find container \"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\": container with ID starting with a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.344799 4875 scope.go:117] "RemoveContainer" containerID="ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.344942 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc"} err="failed to get container status \"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\": rpc error: code = NotFound desc = could not find container \"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\": container with ID starting with ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.344958 4875 scope.go:117] "RemoveContainer" containerID="27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.345282 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7"} err="failed to get container status \"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\": rpc error: code = NotFound desc = could not find container \"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\": container with ID starting with 27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.345303 4875 scope.go:117] "RemoveContainer" containerID="0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.345494 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918"} err="failed to get container status \"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\": rpc error: code = NotFound desc = could not find container \"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\": container with ID starting with 0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.345510 4875 scope.go:117] "RemoveContainer" containerID="17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.345695 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260"} err="failed to get container status \"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260\": rpc error: code = NotFound desc = could not find container \"17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260\": container with ID starting with 17f2a67f37ba66dd6ebc54288e491b28a5f332ad2570d5f18a0692e7a8772260 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.345711 4875 scope.go:117] "RemoveContainer" containerID="41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.345841 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c"} err="failed to get container status \"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c\": rpc error: code = NotFound desc = could not find container \"41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c\": container with ID starting with 41b068d7dce24e063f88b24d12027fc181be585518eba9453c6c9891aa75150c not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.345856 4875 scope.go:117] "RemoveContainer" containerID="dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.345984 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6"} err="failed to get container status \"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\": rpc error: code = NotFound desc = could not find container \"dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6\": container with ID starting with dc03fe4019f7c4ea99075fdd63b787f0f6869f5da4ca41fc6c97c706b17f94b6 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.345998 4875 scope.go:117] "RemoveContainer" containerID="48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.346120 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa"} err="failed to get container status \"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\": rpc error: code = NotFound desc = could not find container \"48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa\": container with ID starting with 48be89182817997e1665d526de66e9aa93e684b788675d5b64a9eabd9e66a6aa not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.346134 4875 scope.go:117] "RemoveContainer" containerID="2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.346259 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f"} err="failed to get container status \"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\": rpc error: code = NotFound desc = could not find container \"2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f\": container with ID starting with 2115489427d31680677d597d20260da1ad04c00a8840f206d053b2de28f6838f not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.346274 4875 scope.go:117] "RemoveContainer" containerID="2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.346444 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88"} err="failed to get container status \"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\": rpc error: code = NotFound desc = could not find container \"2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88\": container with ID starting with 2d69b869a955e6fa222c67c292a5e4dce4f82a5fd50c73c268ebeb8b2c40aa88 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.346462 4875 scope.go:117] "RemoveContainer" containerID="a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.346616 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6"} err="failed to get container status \"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\": rpc error: code = NotFound desc = could not find container \"a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6\": container with ID starting with a6efe434ac2f3712c103f2d9cafeaad02a70d3fb3d0d9f93245649d553c898d6 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.346631 4875 scope.go:117] "RemoveContainer" containerID="ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.346772 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc"} err="failed to get container status \"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\": rpc error: code = NotFound desc = could not find container \"ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc\": container with ID starting with ba36b25ade27c707beb24e385c1f24b662d73897042987f8ded50cfa269fd5cc not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.346784 4875 scope.go:117] "RemoveContainer" containerID="27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.346933 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7"} err="failed to get container status \"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\": rpc error: code = NotFound desc = could not find container \"27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7\": container with ID starting with 27e8d19997c89720a4ffd327965ccb98a2ee7e2e8bc5267c17d9525f499204e7 not found: ID does not exist"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.346952 4875 scope.go:117] "RemoveContainer" containerID="0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918"
Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.347095 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918"} err="failed to get container status \"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\": rpc error: code = NotFound desc = could not find container \"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\": container with ID starting with 0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918 not found: ID does not exist"
\"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\": rpc error: code = NotFound desc = could not find container \"0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918\": container with ID starting with 0e0140f7af440d4c216a4d91ad004cebbf260e9c4d0037f588380bb5cb4b0918 not found: ID does not exist" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.438427 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.553966 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mps6c"] Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.561950 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mps6c"] Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.707500 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-q6zbs"] Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.708451 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.710358 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-chnvm" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.714205 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.714261 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.830059 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx4x8\" (UniqueName: \"kubernetes.io/projected/f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f-kube-api-access-fx4x8\") pod \"nmstate-operator-646758c888-q6zbs\" (UID: \"f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f\") " pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.931639 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx4x8\" (UniqueName: \"kubernetes.io/projected/f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f-kube-api-access-fx4x8\") pod \"nmstate-operator-646758c888-q6zbs\" (UID: \"f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f\") " pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" Jan 30 17:06:01 crc kubenswrapper[4875]: I0130 17:06:01.951144 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx4x8\" (UniqueName: \"kubernetes.io/projected/f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f-kube-api-access-fx4x8\") pod \"nmstate-operator-646758c888-q6zbs\" (UID: \"f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f\") " pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" Jan 30 17:06:02 crc kubenswrapper[4875]: I0130 17:06:02.023758 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" Jan 30 17:06:02 crc kubenswrapper[4875]: E0130 17:06:02.043983 4875 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-q6zbs_openshift-nmstate_f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f_0(22cc34dcbfe36bfd607beee0aa5555aefd334924fccd8c0f78ce32ded188be56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 17:06:02 crc kubenswrapper[4875]: E0130 17:06:02.044069 4875 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-q6zbs_openshift-nmstate_f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f_0(22cc34dcbfe36bfd607beee0aa5555aefd334924fccd8c0f78ce32ded188be56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" Jan 30 17:06:02 crc kubenswrapper[4875]: E0130 17:06:02.044096 4875 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-q6zbs_openshift-nmstate_f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f_0(22cc34dcbfe36bfd607beee0aa5555aefd334924fccd8c0f78ce32ded188be56): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" Jan 30 17:06:02 crc kubenswrapper[4875]: E0130 17:06:02.044158 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-646758c888-q6zbs_openshift-nmstate(f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-646758c888-q6zbs_openshift-nmstate(f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-q6zbs_openshift-nmstate_f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f_0(22cc34dcbfe36bfd607beee0aa5555aefd334924fccd8c0f78ce32ded188be56): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" podUID="f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f" Jan 30 17:06:02 crc kubenswrapper[4875]: I0130 17:06:02.143497 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85cf29f6-017d-475a-b63c-cd1cab3c8132" path="/var/lib/kubelet/pods/85cf29f6-017d-475a-b63c-cd1cab3c8132/volumes" Jan 30 17:06:02 crc kubenswrapper[4875]: I0130 17:06:02.157776 4875 generic.go:334] "Generic (PLEG): container finished" podID="51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f" containerID="41cbebce12d134ab5c4cecac4225d2370add4078f0e948b04a3cd884ce2015b3" exitCode=0 Jan 30 17:06:02 crc kubenswrapper[4875]: I0130 17:06:02.157913 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" event={"ID":"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f","Type":"ContainerDied","Data":"41cbebce12d134ab5c4cecac4225d2370add4078f0e948b04a3cd884ce2015b3"} Jan 30 17:06:02 crc kubenswrapper[4875]: I0130 17:06:02.158139 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" event={"ID":"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f","Type":"ContainerStarted","Data":"483b68ea3abcc9884e9bbd93d9080b38745046d31a4bcbfbbc54d5c098f15e48"} Jan 30 17:06:02 crc kubenswrapper[4875]: I0130 17:06:02.159559 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ck4hq_562b7bc8-0631-497c-9b8a-05af82dcfff9/kube-multus/2.log" Jan 30 17:06:03 crc kubenswrapper[4875]: I0130 17:06:03.168452 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" event={"ID":"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f","Type":"ContainerStarted","Data":"ad96ba03f63f8eb1c04f5fe899c36c477898b3d49e27acb670ecd68569abcd22"} Jan 30 17:06:03 crc kubenswrapper[4875]: I0130 17:06:03.169051 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" event={"ID":"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f","Type":"ContainerStarted","Data":"0024cb41faf19d83483b6abe3a042777ad3fd236e908df156c44599453dc8d97"} Jan 30 17:06:03 crc kubenswrapper[4875]: I0130 17:06:03.169069 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" event={"ID":"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f","Type":"ContainerStarted","Data":"016f0e531275d3ede6eccab54c74118c1e48c2b94a22337137e2ece2b908a05a"} Jan 30 17:06:03 crc kubenswrapper[4875]: I0130 17:06:03.169081 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" event={"ID":"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f","Type":"ContainerStarted","Data":"a351481e08882cf3149fceafa4d62d4b3bad91ddb8208854fae05bc48f128c7f"} Jan 30 17:06:03 crc kubenswrapper[4875]: I0130 17:06:03.169093 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" event={"ID":"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f","Type":"ContainerStarted","Data":"b2a38b3a3a4746e85c48132618c044fb96313fe807b7c7b54a0382e3babe9b5c"} Jan 30 17:06:03 crc kubenswrapper[4875]: I0130 17:06:03.169102 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" event={"ID":"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f","Type":"ContainerStarted","Data":"3f033fe09b9d6bbc8e3d9043c3b50736ac903afaa58c80a66d3cf4377bb1381b"} Jan 30 17:06:06 crc kubenswrapper[4875]: I0130 17:06:06.184577 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" event={"ID":"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f","Type":"ContainerStarted","Data":"2b7b12f5edaccc429870620494c0920135f059574e688b45c295e13914dec95f"} Jan 30 17:06:08 crc kubenswrapper[4875]: I0130 17:06:08.197206 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" event={"ID":"51ccbccb-7241-4c3b-be2a-e6e6ef6ba29f","Type":"ContainerStarted","Data":"b14cb0c7ce68a93815586ebb9453879593e69100c078cbad31e4ccb37c20926c"} Jan 30 17:06:08 crc kubenswrapper[4875]: I0130 17:06:08.197720 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:08 crc kubenswrapper[4875]: I0130 17:06:08.197731 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:08 crc kubenswrapper[4875]: I0130 17:06:08.197753 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:08 crc kubenswrapper[4875]: I0130 17:06:08.223041 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:08 crc kubenswrapper[4875]: I0130 17:06:08.224009 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:08 crc kubenswrapper[4875]: I0130 17:06:08.225670 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" podStartSLOduration=7.225661571 podStartE2EDuration="7.225661571s" podCreationTimestamp="2026-01-30 17:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:06:08.222408772 +0000 UTC m=+578.769772155" watchObservedRunningTime="2026-01-30 17:06:08.225661571 +0000 UTC m=+578.773024954" Jan 30 17:06:08 crc kubenswrapper[4875]: I0130 17:06:08.287755 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-q6zbs"] Jan 30 17:06:08 crc kubenswrapper[4875]: I0130 17:06:08.287865 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" Jan 30 17:06:08 crc kubenswrapper[4875]: I0130 17:06:08.288228 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" Jan 30 17:06:08 crc kubenswrapper[4875]: E0130 17:06:08.309040 4875 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-q6zbs_openshift-nmstate_f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f_0(6ec43f6f73d87c9c51b0c08fc2934812668d9841047ef0f6dcb727ee572c2092): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 17:06:08 crc kubenswrapper[4875]: E0130 17:06:08.309103 4875 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-q6zbs_openshift-nmstate_f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f_0(6ec43f6f73d87c9c51b0c08fc2934812668d9841047ef0f6dcb727ee572c2092): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" Jan 30 17:06:08 crc kubenswrapper[4875]: E0130 17:06:08.309124 4875 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-q6zbs_openshift-nmstate_f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f_0(6ec43f6f73d87c9c51b0c08fc2934812668d9841047ef0f6dcb727ee572c2092): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" Jan 30 17:06:08 crc kubenswrapper[4875]: E0130 17:06:08.309162 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-646758c888-q6zbs_openshift-nmstate(f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-646758c888-q6zbs_openshift-nmstate(f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-q6zbs_openshift-nmstate_f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f_0(6ec43f6f73d87c9c51b0c08fc2934812668d9841047ef0f6dcb727ee572c2092): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" podUID="f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f" Jan 30 17:06:12 crc kubenswrapper[4875]: I0130 17:06:12.136015 4875 scope.go:117] "RemoveContainer" containerID="62c943c842d51e922bb22248b6399f5410f8500f6276b2f741a1e5b35ad9a256" Jan 30 17:06:12 crc kubenswrapper[4875]: E0130 17:06:12.136654 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-ck4hq_openshift-multus(562b7bc8-0631-497c-9b8a-05af82dcfff9)\"" pod="openshift-multus/multus-ck4hq" podUID="562b7bc8-0631-497c-9b8a-05af82dcfff9" Jan 30 17:06:23 crc kubenswrapper[4875]: I0130 17:06:23.135597 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" Jan 30 17:06:23 crc kubenswrapper[4875]: I0130 17:06:23.136517 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" Jan 30 17:06:23 crc kubenswrapper[4875]: E0130 17:06:23.155761 4875 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-q6zbs_openshift-nmstate_f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f_0(a58985a395bf70cef1afad7d1b4c9256fa714aebf5a3f43b65f196dbed44c4fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 17:06:23 crc kubenswrapper[4875]: E0130 17:06:23.155993 4875 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-q6zbs_openshift-nmstate_f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f_0(a58985a395bf70cef1afad7d1b4c9256fa714aebf5a3f43b65f196dbed44c4fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" Jan 30 17:06:23 crc kubenswrapper[4875]: E0130 17:06:23.156020 4875 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-q6zbs_openshift-nmstate_f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f_0(a58985a395bf70cef1afad7d1b4c9256fa714aebf5a3f43b65f196dbed44c4fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" Jan 30 17:06:23 crc kubenswrapper[4875]: E0130 17:06:23.156068 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-operator-646758c888-q6zbs_openshift-nmstate(f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-operator-646758c888-q6zbs_openshift-nmstate(f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-operator-646758c888-q6zbs_openshift-nmstate_f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f_0(a58985a395bf70cef1afad7d1b4c9256fa714aebf5a3f43b65f196dbed44c4fa): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" podUID="f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f" Jan 30 17:06:26 crc kubenswrapper[4875]: I0130 17:06:26.135791 4875 scope.go:117] "RemoveContainer" containerID="62c943c842d51e922bb22248b6399f5410f8500f6276b2f741a1e5b35ad9a256" Jan 30 17:06:26 crc kubenswrapper[4875]: I0130 17:06:26.287773 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:06:26 crc kubenswrapper[4875]: I0130 17:06:26.288134 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:06:26 crc kubenswrapper[4875]: I0130 17:06:26.288179 4875 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 17:06:26 crc kubenswrapper[4875]: I0130 17:06:26.288796 4875 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ea4fc173ca1c7737282f76b497b93072de498c51c422171abc059436c0e39c75"} pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:06:26 crc kubenswrapper[4875]: I0130 17:06:26.288862 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" containerID="cri-o://ea4fc173ca1c7737282f76b497b93072de498c51c422171abc059436c0e39c75" gracePeriod=600 Jan 30 17:06:26 crc kubenswrapper[4875]: I0130 17:06:26.311886 4875 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-ck4hq_562b7bc8-0631-497c-9b8a-05af82dcfff9/kube-multus/2.log" Jan 30 17:06:26 crc kubenswrapper[4875]: I0130 17:06:26.311960 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ck4hq" event={"ID":"562b7bc8-0631-497c-9b8a-05af82dcfff9","Type":"ContainerStarted","Data":"ebbc8962ac3119fb07538aec04ed6be2366e3f70f3913cc8127a989acd4763ed"} Jan 30 17:06:27 crc kubenswrapper[4875]: I0130 17:06:27.319201 4875 generic.go:334] "Generic (PLEG): container finished" podID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerID="ea4fc173ca1c7737282f76b497b93072de498c51c422171abc059436c0e39c75" exitCode=0 Jan 30 17:06:27 crc kubenswrapper[4875]: I0130 17:06:27.319256 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerDied","Data":"ea4fc173ca1c7737282f76b497b93072de498c51c422171abc059436c0e39c75"} Jan 30 17:06:27 crc kubenswrapper[4875]: I0130 17:06:27.319769 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerStarted","Data":"44cbbe2347c99f305a77309b497f459a3e30dcbc1e853b9af4c1697fcc292f86"} Jan 30 17:06:27 crc kubenswrapper[4875]: I0130 17:06:27.319814 4875 scope.go:117] "RemoveContainer" containerID="12371742fd50f0efbcda52c6975077df5a1e419df1f9382a50ead1f6472b0960" Jan 30 17:06:31 crc kubenswrapper[4875]: I0130 17:06:31.458988 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-24kzl" Jan 30 17:06:36 crc kubenswrapper[4875]: I0130 17:06:36.135246 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" Jan 30 17:06:36 crc kubenswrapper[4875]: I0130 17:06:36.136238 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" Jan 30 17:06:36 crc kubenswrapper[4875]: I0130 17:06:36.338137 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-q6zbs"] Jan 30 17:06:36 crc kubenswrapper[4875]: W0130 17:06:36.354823 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf004abd4_e3a2_4f6e_8c3c_85202b7a4b9f.slice/crio-3a0dd640df8242b4adcdb6b3a00042bed8aa9db94a728816e53ae5b3b20daca8 WatchSource:0}: Error finding container 3a0dd640df8242b4adcdb6b3a00042bed8aa9db94a728816e53ae5b3b20daca8: Status 404 returned error can't find the container with id 3a0dd640df8242b4adcdb6b3a00042bed8aa9db94a728816e53ae5b3b20daca8 Jan 30 17:06:36 crc kubenswrapper[4875]: I0130 17:06:36.367552 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" event={"ID":"f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f","Type":"ContainerStarted","Data":"3a0dd640df8242b4adcdb6b3a00042bed8aa9db94a728816e53ae5b3b20daca8"} Jan 30 17:06:39 crc kubenswrapper[4875]: I0130 17:06:39.389754 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" event={"ID":"f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f","Type":"ContainerStarted","Data":"69b1d0ce87d73e209bb32db63c58a0cf4934eb0702f21c11527dd54ae03cbe27"} Jan 30 17:06:40 crc kubenswrapper[4875]: I0130 17:06:40.947056 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-q6zbs" podStartSLOduration=37.504690248 podStartE2EDuration="39.947034562s" podCreationTimestamp="2026-01-30 17:06:01 +0000 UTC" firstStartedPulling="2026-01-30 17:06:36.357896663 +0000 UTC m=+606.905260046" lastFinishedPulling="2026-01-30 17:06:38.800240977 +0000 UTC m=+609.347604360" observedRunningTime="2026-01-30 17:06:39.417637344 +0000 UTC m=+609.965000727" watchObservedRunningTime="2026-01-30 17:06:40.947034562 +0000 UTC m=+611.494397955" Jan 30 17:06:40 crc kubenswrapper[4875]: I0130 17:06:40.952106 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-k57t9"] Jan 30 17:06:40 crc kubenswrapper[4875]: I0130 17:06:40.953107 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-k57t9" Jan 30 17:06:40 crc kubenswrapper[4875]: I0130 17:06:40.956406 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-47mwk"] Jan 30 17:06:40 crc kubenswrapper[4875]: I0130 17:06:40.957286 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47mwk" Jan 30 17:06:40 crc kubenswrapper[4875]: I0130 17:06:40.960257 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-2k5j4" Jan 30 17:06:40 crc kubenswrapper[4875]: I0130 17:06:40.961948 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 30 17:06:40 crc kubenswrapper[4875]: I0130 17:06:40.971864 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-k57t9"] Jan 30 17:06:40 crc kubenswrapper[4875]: I0130 17:06:40.977958 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-s6n6v"] Jan 30 17:06:40 crc kubenswrapper[4875]: I0130 17:06:40.990599 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-s6n6v" Jan 30 17:06:40 crc kubenswrapper[4875]: I0130 17:06:40.994284 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/227eb898-0116-4963-9c36-991e1d69089b-nmstate-lock\") pod \"nmstate-handler-s6n6v\" (UID: \"227eb898-0116-4963-9c36-991e1d69089b\") " pod="openshift-nmstate/nmstate-handler-s6n6v" Jan 30 17:06:40 crc kubenswrapper[4875]: I0130 17:06:40.994347 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/227eb898-0116-4963-9c36-991e1d69089b-dbus-socket\") pod \"nmstate-handler-s6n6v\" (UID: \"227eb898-0116-4963-9c36-991e1d69089b\") " pod="openshift-nmstate/nmstate-handler-s6n6v" Jan 30 17:06:40 crc kubenswrapper[4875]: I0130 17:06:40.994471 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/227eb898-0116-4963-9c36-991e1d69089b-ovs-socket\") pod \"nmstate-handler-s6n6v\" (UID: \"227eb898-0116-4963-9c36-991e1d69089b\") " pod="openshift-nmstate/nmstate-handler-s6n6v" Jan 30 17:06:40 crc kubenswrapper[4875]: I0130 17:06:40.994745 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x59w2\" (UniqueName: \"kubernetes.io/projected/227eb898-0116-4963-9c36-991e1d69089b-kube-api-access-x59w2\") pod \"nmstate-handler-s6n6v\" (UID: \"227eb898-0116-4963-9c36-991e1d69089b\") " pod="openshift-nmstate/nmstate-handler-s6n6v" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.013206 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-47mwk"] Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.093251 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb"] Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.093915 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.097941 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.098107 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.099064 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlq77\" (UniqueName: \"kubernetes.io/projected/07aa98a9-5198-4088-abe2-c57d80a64e3e-kube-api-access-vlq77\") pod \"nmstate-webhook-8474b5b9d8-47mwk\" (UID: \"07aa98a9-5198-4088-abe2-c57d80a64e3e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47mwk" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.099114 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92pl2\" (UniqueName: \"kubernetes.io/projected/10d88af6-3015-4590-af17-92693e9d5c2d-kube-api-access-92pl2\") pod \"nmstate-metrics-54757c584b-k57t9\" (UID: \"10d88af6-3015-4590-af17-92693e9d5c2d\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-k57t9" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.099181 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/227eb898-0116-4963-9c36-991e1d69089b-nmstate-lock\") pod \"nmstate-handler-s6n6v\" (UID: \"227eb898-0116-4963-9c36-991e1d69089b\") " pod="openshift-nmstate/nmstate-handler-s6n6v" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.099220 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/227eb898-0116-4963-9c36-991e1d69089b-dbus-socket\") pod \"nmstate-handler-s6n6v\" (UID: \"227eb898-0116-4963-9c36-991e1d69089b\") " pod="openshift-nmstate/nmstate-handler-s6n6v" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.099246 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8s6p\" (UniqueName: \"kubernetes.io/projected/27bed214-93d4-493b-a471-2f0913007e55-kube-api-access-t8s6p\") pod \"nmstate-console-plugin-7754f76f8b-cjpzb\" (UID: \"27bed214-93d4-493b-a471-2f0913007e55\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.099269 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/227eb898-0116-4963-9c36-991e1d69089b-ovs-socket\") pod \"nmstate-handler-s6n6v\" (UID: \"227eb898-0116-4963-9c36-991e1d69089b\") " pod="openshift-nmstate/nmstate-handler-s6n6v" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.099309 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/27bed214-93d4-493b-a471-2f0913007e55-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-cjpzb\" (UID: \"27bed214-93d4-493b-a471-2f0913007e55\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.099335 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/07aa98a9-5198-4088-abe2-c57d80a64e3e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-47mwk\" (UID: \"07aa98a9-5198-4088-abe2-c57d80a64e3e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47mwk" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.099353 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/27bed214-93d4-493b-a471-2f0913007e55-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-cjpzb\" (UID: \"27bed214-93d4-493b-a471-2f0913007e55\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.099376 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x59w2\" (UniqueName: \"kubernetes.io/projected/227eb898-0116-4963-9c36-991e1d69089b-kube-api-access-x59w2\") pod \"nmstate-handler-s6n6v\" (UID: \"227eb898-0116-4963-9c36-991e1d69089b\") " pod="openshift-nmstate/nmstate-handler-s6n6v" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.099834 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/227eb898-0116-4963-9c36-991e1d69089b-nmstate-lock\") pod \"nmstate-handler-s6n6v\" (UID: \"227eb898-0116-4963-9c36-991e1d69089b\") " pod="openshift-nmstate/nmstate-handler-s6n6v" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.100155 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/227eb898-0116-4963-9c36-991e1d69089b-dbus-socket\") pod \"nmstate-handler-s6n6v\" (UID: \"227eb898-0116-4963-9c36-991e1d69089b\") " pod="openshift-nmstate/nmstate-handler-s6n6v" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.100569 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/227eb898-0116-4963-9c36-991e1d69089b-ovs-socket\") pod \"nmstate-handler-s6n6v\" (UID: \"227eb898-0116-4963-9c36-991e1d69089b\") " pod="openshift-nmstate/nmstate-handler-s6n6v" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.115766 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-vppwn" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.121135 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb"] Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.133043 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x59w2\" (UniqueName: \"kubernetes.io/projected/227eb898-0116-4963-9c36-991e1d69089b-kube-api-access-x59w2\") pod \"nmstate-handler-s6n6v\" (UID: \"227eb898-0116-4963-9c36-991e1d69089b\") " pod="openshift-nmstate/nmstate-handler-s6n6v" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.200884 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8s6p\" (UniqueName: \"kubernetes.io/projected/27bed214-93d4-493b-a471-2f0913007e55-kube-api-access-t8s6p\") pod \"nmstate-console-plugin-7754f76f8b-cjpzb\" (UID: \"27bed214-93d4-493b-a471-2f0913007e55\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.201055 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/27bed214-93d4-493b-a471-2f0913007e55-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-cjpzb\" (UID: \"27bed214-93d4-493b-a471-2f0913007e55\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.201093 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/07aa98a9-5198-4088-abe2-c57d80a64e3e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-47mwk\" (UID: \"07aa98a9-5198-4088-abe2-c57d80a64e3e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47mwk" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.201113 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/27bed214-93d4-493b-a471-2f0913007e55-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-cjpzb\" (UID: \"27bed214-93d4-493b-a471-2f0913007e55\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.201179 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlq77\" (UniqueName: \"kubernetes.io/projected/07aa98a9-5198-4088-abe2-c57d80a64e3e-kube-api-access-vlq77\") pod \"nmstate-webhook-8474b5b9d8-47mwk\" (UID: \"07aa98a9-5198-4088-abe2-c57d80a64e3e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47mwk" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.201200 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92pl2\" (UniqueName: \"kubernetes.io/projected/10d88af6-3015-4590-af17-92693e9d5c2d-kube-api-access-92pl2\") pod \"nmstate-metrics-54757c584b-k57t9\" (UID: \"10d88af6-3015-4590-af17-92693e9d5c2d\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-k57t9" Jan 30 17:06:41 crc kubenswrapper[4875]: E0130 17:06:41.201575 4875 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 30 17:06:41 crc kubenswrapper[4875]: E0130 17:06:41.201640 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27bed214-93d4-493b-a471-2f0913007e55-plugin-serving-cert podName:27bed214-93d4-493b-a471-2f0913007e55 nodeName:}" failed. No retries permitted until 2026-01-30 17:06:41.70162427 +0000 UTC m=+612.248987653 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/27bed214-93d4-493b-a471-2f0913007e55-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-cjpzb" (UID: "27bed214-93d4-493b-a471-2f0913007e55") : secret "plugin-serving-cert" not found Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.202655 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/27bed214-93d4-493b-a471-2f0913007e55-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-cjpzb\" (UID: \"27bed214-93d4-493b-a471-2f0913007e55\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.216819 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/07aa98a9-5198-4088-abe2-c57d80a64e3e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-47mwk\" (UID: \"07aa98a9-5198-4088-abe2-c57d80a64e3e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47mwk" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.219013 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8s6p\" (UniqueName: \"kubernetes.io/projected/27bed214-93d4-493b-a471-2f0913007e55-kube-api-access-t8s6p\") pod \"nmstate-console-plugin-7754f76f8b-cjpzb\" (UID: \"27bed214-93d4-493b-a471-2f0913007e55\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.222462 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlq77\" (UniqueName: \"kubernetes.io/projected/07aa98a9-5198-4088-abe2-c57d80a64e3e-kube-api-access-vlq77\") pod \"nmstate-webhook-8474b5b9d8-47mwk\" (UID: \"07aa98a9-5198-4088-abe2-c57d80a64e3e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47mwk" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.222816 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92pl2\" (UniqueName: \"kubernetes.io/projected/10d88af6-3015-4590-af17-92693e9d5c2d-kube-api-access-92pl2\") pod \"nmstate-metrics-54757c584b-k57t9\" (UID: \"10d88af6-3015-4590-af17-92693e9d5c2d\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-k57t9" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.268301 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-k57t9" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.286155 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47mwk" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.303274 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6d57f9bdc4-l7vr6"] Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.308800 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.314207 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-s6n6v" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.317120 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6d57f9bdc4-l7vr6"] Jan 30 17:06:41 crc kubenswrapper[4875]: W0130 17:06:41.356756 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod227eb898_0116_4963_9c36_991e1d69089b.slice/crio-ad7091723e0383f8d8850adc1c735052d697625ef3c76dde1eeb4f479975fcb5 WatchSource:0}: Error finding container ad7091723e0383f8d8850adc1c735052d697625ef3c76dde1eeb4f479975fcb5: Status 404 returned error can't find the container with id ad7091723e0383f8d8850adc1c735052d697625ef3c76dde1eeb4f479975fcb5 Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.402686 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpqpg\" (UniqueName: \"kubernetes.io/projected/bb353aed-75ea-48c9-b25a-c6efa179a364-kube-api-access-lpqpg\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.402738 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb353aed-75ea-48c9-b25a-c6efa179a364-oauth-serving-cert\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.402768 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb353aed-75ea-48c9-b25a-c6efa179a364-trusted-ca-bundle\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.402811 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb353aed-75ea-48c9-b25a-c6efa179a364-console-serving-cert\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.402916 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb353aed-75ea-48c9-b25a-c6efa179a364-service-ca\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.402942 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb353aed-75ea-48c9-b25a-c6efa179a364-console-config\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.402970 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb353aed-75ea-48c9-b25a-c6efa179a364-console-oauth-config\") pod 
\"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.408263 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-s6n6v" event={"ID":"227eb898-0116-4963-9c36-991e1d69089b","Type":"ContainerStarted","Data":"ad7091723e0383f8d8850adc1c735052d697625ef3c76dde1eeb4f479975fcb5"} Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.503803 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb353aed-75ea-48c9-b25a-c6efa179a364-oauth-serving-cert\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.503837 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb353aed-75ea-48c9-b25a-c6efa179a364-trusted-ca-bundle\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.503869 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb353aed-75ea-48c9-b25a-c6efa179a364-console-serving-cert\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.503923 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb353aed-75ea-48c9-b25a-c6efa179a364-service-ca\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.503941 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb353aed-75ea-48c9-b25a-c6efa179a364-console-config\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.503961 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb353aed-75ea-48c9-b25a-c6efa179a364-console-oauth-config\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.503999 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpqpg\" (UniqueName: \"kubernetes.io/projected/bb353aed-75ea-48c9-b25a-c6efa179a364-kube-api-access-lpqpg\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.505659 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb353aed-75ea-48c9-b25a-c6efa179a364-oauth-serving-cert\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " 
pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.505735 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb353aed-75ea-48c9-b25a-c6efa179a364-trusted-ca-bundle\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.506405 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb353aed-75ea-48c9-b25a-c6efa179a364-service-ca\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.506414 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb353aed-75ea-48c9-b25a-c6efa179a364-console-config\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.506961 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-k57t9"] Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.511186 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb353aed-75ea-48c9-b25a-c6efa179a364-console-serving-cert\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.511755 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb353aed-75ea-48c9-b25a-c6efa179a364-console-oauth-config\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.524989 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpqpg\" (UniqueName: \"kubernetes.io/projected/bb353aed-75ea-48c9-b25a-c6efa179a364-kube-api-access-lpqpg\") pod \"console-6d57f9bdc4-l7vr6\" (UID: \"bb353aed-75ea-48c9-b25a-c6efa179a364\") " pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.535846 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-47mwk"] Jan 30 17:06:41 crc kubenswrapper[4875]: W0130 17:06:41.538721 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07aa98a9_5198_4088_abe2_c57d80a64e3e.slice/crio-5882b69c559340f8d21323696706b6e428b937e54a0e6c4d96fa9f8293196468 WatchSource:0}: Error finding container 5882b69c559340f8d21323696706b6e428b937e54a0e6c4d96fa9f8293196468: Status 404 returned error can't find the container with id 5882b69c559340f8d21323696706b6e428b937e54a0e6c4d96fa9f8293196468 Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.640947 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.708629 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/27bed214-93d4-493b-a471-2f0913007e55-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-cjpzb\" (UID: \"27bed214-93d4-493b-a471-2f0913007e55\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.711967 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/27bed214-93d4-493b-a471-2f0913007e55-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-cjpzb\" (UID: \"27bed214-93d4-493b-a471-2f0913007e55\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.714422 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb" Jan 30 17:06:41 crc kubenswrapper[4875]: I0130 17:06:41.794915 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6d57f9bdc4-l7vr6"] Jan 30 17:06:41 crc kubenswrapper[4875]: W0130 17:06:41.806878 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb353aed_75ea_48c9_b25a_c6efa179a364.slice/crio-9b1204bfcdcc5d7d1c55073196fef26d986112458627334fc3d7179d64ff3196 WatchSource:0}: Error finding container 9b1204bfcdcc5d7d1c55073196fef26d986112458627334fc3d7179d64ff3196: Status 404 returned error can't find the container with id 9b1204bfcdcc5d7d1c55073196fef26d986112458627334fc3d7179d64ff3196 Jan 30 17:06:42 crc kubenswrapper[4875]: I0130 17:06:42.094071 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb"] Jan 30 17:06:42 crc kubenswrapper[4875]: W0130 17:06:42.100185 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27bed214_93d4_493b_a471_2f0913007e55.slice/crio-2baeee80528c7dbfdab0ed2eb5dba9b226e2eefaffc8c1c028bc9be85967abd0 WatchSource:0}: Error finding container 2baeee80528c7dbfdab0ed2eb5dba9b226e2eefaffc8c1c028bc9be85967abd0: Status 404 returned error can't find the container with id 2baeee80528c7dbfdab0ed2eb5dba9b226e2eefaffc8c1c028bc9be85967abd0 Jan 30 17:06:42 crc kubenswrapper[4875]: I0130 17:06:42.414816 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d57f9bdc4-l7vr6" event={"ID":"bb353aed-75ea-48c9-b25a-c6efa179a364","Type":"ContainerStarted","Data":"5cb8bdbe33e652ed8d0643684974c9b1818446957e3562aae7a068377afbb667"} Jan 30 17:06:42 crc kubenswrapper[4875]: I0130 17:06:42.414859 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d57f9bdc4-l7vr6" event={"ID":"bb353aed-75ea-48c9-b25a-c6efa179a364","Type":"ContainerStarted","Data":"9b1204bfcdcc5d7d1c55073196fef26d986112458627334fc3d7179d64ff3196"} Jan 30 17:06:42 crc kubenswrapper[4875]: I0130 17:06:42.415928 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb" event={"ID":"27bed214-93d4-493b-a471-2f0913007e55","Type":"ContainerStarted","Data":"2baeee80528c7dbfdab0ed2eb5dba9b226e2eefaffc8c1c028bc9be85967abd0"} Jan 30 17:06:42 crc 
kubenswrapper[4875]: I0130 17:06:42.417051 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47mwk" event={"ID":"07aa98a9-5198-4088-abe2-c57d80a64e3e","Type":"ContainerStarted","Data":"5882b69c559340f8d21323696706b6e428b937e54a0e6c4d96fa9f8293196468"} Jan 30 17:06:42 crc kubenswrapper[4875]: I0130 17:06:42.417854 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-k57t9" event={"ID":"10d88af6-3015-4590-af17-92693e9d5c2d","Type":"ContainerStarted","Data":"679789bc1952b01e4fef998287aec79b004a009d3e3780267bb5eca5a161e2ce"} Jan 30 17:06:44 crc kubenswrapper[4875]: I0130 17:06:44.429483 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47mwk" event={"ID":"07aa98a9-5198-4088-abe2-c57d80a64e3e","Type":"ContainerStarted","Data":"631a15621c37c125dad064e5c5d5ee705dea5e40c63bd53f0fb0fdcd617ce16d"} Jan 30 17:06:44 crc kubenswrapper[4875]: I0130 17:06:44.429865 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47mwk" Jan 30 17:06:44 crc kubenswrapper[4875]: I0130 17:06:44.430823 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-k57t9" event={"ID":"10d88af6-3015-4590-af17-92693e9d5c2d","Type":"ContainerStarted","Data":"b3daca52b7553898f2b229791441629ae6817eda8b36dbc922bbe58399a257f1"} Jan 30 17:06:44 crc kubenswrapper[4875]: I0130 17:06:44.432685 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-s6n6v" event={"ID":"227eb898-0116-4963-9c36-991e1d69089b","Type":"ContainerStarted","Data":"2e5a7353d4f5f7757bdcf1526a80a77c1734cbaa38d43f4de0b3fbed078dcd48"} Jan 30 17:06:44 crc kubenswrapper[4875]: I0130 17:06:44.432783 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-s6n6v" Jan 30 17:06:44 crc kubenswrapper[4875]: I0130 17:06:44.449980 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47mwk" podStartSLOduration=1.936263455 podStartE2EDuration="4.449957742s" podCreationTimestamp="2026-01-30 17:06:40 +0000 UTC" firstStartedPulling="2026-01-30 17:06:41.540659768 +0000 UTC m=+612.088023141" lastFinishedPulling="2026-01-30 17:06:44.054354015 +0000 UTC m=+614.601717428" observedRunningTime="2026-01-30 17:06:44.44245707 +0000 UTC m=+614.989820483" watchObservedRunningTime="2026-01-30 17:06:44.449957742 +0000 UTC m=+614.997321125" Jan 30 17:06:44 crc kubenswrapper[4875]: I0130 17:06:44.450387 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6d57f9bdc4-l7vr6" podStartSLOduration=3.450380122 podStartE2EDuration="3.450380122s" podCreationTimestamp="2026-01-30 17:06:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:06:42.438859057 +0000 UTC m=+612.986222440" watchObservedRunningTime="2026-01-30 17:06:44.450380122 +0000 UTC m=+614.997743505" Jan 30 17:06:44 crc kubenswrapper[4875]: I0130 17:06:44.461087 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-s6n6v" podStartSLOduration=1.783846482 podStartE2EDuration="4.461070558s" podCreationTimestamp="2026-01-30 17:06:40 +0000 UTC" firstStartedPulling="2026-01-30 17:06:41.358967301 +0000 UTC 
m=+611.906330684" lastFinishedPulling="2026-01-30 17:06:44.036191377 +0000 UTC m=+614.583554760" observedRunningTime="2026-01-30 17:06:44.459428581 +0000 UTC m=+615.006791964" watchObservedRunningTime="2026-01-30 17:06:44.461070558 +0000 UTC m=+615.008433941" Jan 30 17:06:45 crc kubenswrapper[4875]: I0130 17:06:45.437999 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb" event={"ID":"27bed214-93d4-493b-a471-2f0913007e55","Type":"ContainerStarted","Data":"61d6a1a5d3780b9c68d64a60edeb008a35b1ec020e81b6ed3c811e0865e349bd"} Jan 30 17:06:45 crc kubenswrapper[4875]: I0130 17:06:45.455767 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-cjpzb" podStartSLOduration=1.641351883 podStartE2EDuration="4.45574423s" podCreationTimestamp="2026-01-30 17:06:41 +0000 UTC" firstStartedPulling="2026-01-30 17:06:42.103170431 +0000 UTC m=+612.650533814" lastFinishedPulling="2026-01-30 17:06:44.917562778 +0000 UTC m=+615.464926161" observedRunningTime="2026-01-30 17:06:45.451733388 +0000 UTC m=+615.999096791" watchObservedRunningTime="2026-01-30 17:06:45.45574423 +0000 UTC m=+616.003107623" Jan 30 17:06:46 crc kubenswrapper[4875]: I0130 17:06:46.446003 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-k57t9" event={"ID":"10d88af6-3015-4590-af17-92693e9d5c2d","Type":"ContainerStarted","Data":"6a6cd8df5f36acc666256d8e9f8ab37e067edc76446638d4ffb96b7c5ea81e81"} Jan 30 17:06:46 crc kubenswrapper[4875]: I0130 17:06:46.463565 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-k57t9" podStartSLOduration=1.743503623 podStartE2EDuration="6.463548375s" podCreationTimestamp="2026-01-30 17:06:40 +0000 UTC" firstStartedPulling="2026-01-30 17:06:41.521426865 +0000 UTC m=+612.068790248" lastFinishedPulling="2026-01-30 17:06:46.241471627 +0000 UTC m=+616.788835000" observedRunningTime="2026-01-30 17:06:46.459541882 +0000 UTC m=+617.006905265" watchObservedRunningTime="2026-01-30 17:06:46.463548375 +0000 UTC m=+617.010911768" Jan 30 17:06:51 crc kubenswrapper[4875]: I0130 17:06:51.336892 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-s6n6v" Jan 30 17:06:51 crc kubenswrapper[4875]: I0130 17:06:51.641960 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:51 crc kubenswrapper[4875]: I0130 17:06:51.642012 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:51 crc kubenswrapper[4875]: I0130 17:06:51.647201 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:52 crc kubenswrapper[4875]: I0130 17:06:52.480914 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6d57f9bdc4-l7vr6" Jan 30 17:06:52 crc kubenswrapper[4875]: I0130 17:06:52.523713 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-7s4zv"] Jan 30 17:07:01 crc kubenswrapper[4875]: I0130 17:07:01.294266 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47mwk" Jan 30 17:07:12 crc kubenswrapper[4875]: I0130 17:07:12.397246 4875 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs"] Jan 30 17:07:12 crc kubenswrapper[4875]: I0130 17:07:12.398653 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" Jan 30 17:07:12 crc kubenswrapper[4875]: I0130 17:07:12.400742 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 17:07:12 crc kubenswrapper[4875]: I0130 17:07:12.407837 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs"] Jan 30 17:07:12 crc kubenswrapper[4875]: I0130 17:07:12.537447 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4d7437b-5c96-4130-93dc-119f95d08e50-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs\" (UID: \"b4d7437b-5c96-4130-93dc-119f95d08e50\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" Jan 30 17:07:12 crc kubenswrapper[4875]: I0130 17:07:12.537524 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bktf\" (UniqueName: \"kubernetes.io/projected/b4d7437b-5c96-4130-93dc-119f95d08e50-kube-api-access-5bktf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs\" (UID: \"b4d7437b-5c96-4130-93dc-119f95d08e50\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" Jan 30 17:07:12 crc kubenswrapper[4875]: I0130 17:07:12.537566 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4d7437b-5c96-4130-93dc-119f95d08e50-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs\" (UID: \"b4d7437b-5c96-4130-93dc-119f95d08e50\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" Jan 30 17:07:12 crc kubenswrapper[4875]: I0130 17:07:12.639153 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4d7437b-5c96-4130-93dc-119f95d08e50-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs\" (UID: \"b4d7437b-5c96-4130-93dc-119f95d08e50\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" Jan 30 17:07:12 crc kubenswrapper[4875]: I0130 17:07:12.639222 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bktf\" (UniqueName: \"kubernetes.io/projected/b4d7437b-5c96-4130-93dc-119f95d08e50-kube-api-access-5bktf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs\" (UID: \"b4d7437b-5c96-4130-93dc-119f95d08e50\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" Jan 30 17:07:12 crc kubenswrapper[4875]: I0130 17:07:12.639261 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4d7437b-5c96-4130-93dc-119f95d08e50-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs\" (UID: \"b4d7437b-5c96-4130-93dc-119f95d08e50\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" Jan 30 17:07:12 crc kubenswrapper[4875]: I0130 17:07:12.639682 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4d7437b-5c96-4130-93dc-119f95d08e50-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs\" (UID: \"b4d7437b-5c96-4130-93dc-119f95d08e50\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" Jan 30 17:07:12 crc kubenswrapper[4875]: I0130 17:07:12.639694 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4d7437b-5c96-4130-93dc-119f95d08e50-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs\" (UID: \"b4d7437b-5c96-4130-93dc-119f95d08e50\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" Jan 30 17:07:12 crc kubenswrapper[4875]: I0130 17:07:12.657436 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bktf\" (UniqueName: \"kubernetes.io/projected/b4d7437b-5c96-4130-93dc-119f95d08e50-kube-api-access-5bktf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs\" (UID: \"b4d7437b-5c96-4130-93dc-119f95d08e50\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" Jan 30 17:07:12 crc kubenswrapper[4875]: I0130 17:07:12.716566 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" Jan 30 17:07:12 crc kubenswrapper[4875]: I0130 17:07:12.884125 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs"] Jan 30 17:07:13 crc kubenswrapper[4875]: I0130 17:07:13.607980 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" event={"ID":"b4d7437b-5c96-4130-93dc-119f95d08e50","Type":"ContainerStarted","Data":"5921d1ff5863550e36be31987a6e10070c236d8df067fbd1d24279ce9c4e4724"} Jan 30 17:07:14 crc kubenswrapper[4875]: I0130 17:07:14.613746 4875 generic.go:334] "Generic (PLEG): container finished" podID="b4d7437b-5c96-4130-93dc-119f95d08e50" containerID="2392777c812642f146a030b9aa1ab7509f98befeec1a34a44fa24b46ded505bc" exitCode=0 Jan 30 17:07:14 crc kubenswrapper[4875]: I0130 17:07:14.613847 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" event={"ID":"b4d7437b-5c96-4130-93dc-119f95d08e50","Type":"ContainerDied","Data":"2392777c812642f146a030b9aa1ab7509f98befeec1a34a44fa24b46ded505bc"} Jan 30 17:07:16 crc kubenswrapper[4875]: I0130 17:07:16.637145 4875 generic.go:334] "Generic (PLEG): container finished" podID="b4d7437b-5c96-4130-93dc-119f95d08e50" containerID="1acf5c793ec64f0b6dad2eb58c273d7a6c28ec13a150c3b15b76cab929b11d96" exitCode=0 Jan 30 17:07:16 crc kubenswrapper[4875]: I0130 17:07:16.637287 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" event={"ID":"b4d7437b-5c96-4130-93dc-119f95d08e50","Type":"ContainerDied","Data":"1acf5c793ec64f0b6dad2eb58c273d7a6c28ec13a150c3b15b76cab929b11d96"} Jan 30 17:07:17 crc kubenswrapper[4875]: I0130 
17:07:17.559456 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-7s4zv" podUID="37fa5454-ad47-4960-be87-5d9d4e4eab0f" containerName="console" containerID="cri-o://7b2bdbbeadc8800eb70ba36d1807dcfb88b324469fac8765274d0e7bea5a7d46" gracePeriod=15 Jan 30 17:07:17 crc kubenswrapper[4875]: I0130 17:07:17.648741 4875 generic.go:334] "Generic (PLEG): container finished" podID="b4d7437b-5c96-4130-93dc-119f95d08e50" containerID="14fb0aaf687b09423bada1f029d4f3ac546a9f54407ce6b4cf3a31e07ae85b7b" exitCode=0 Jan 30 17:07:17 crc kubenswrapper[4875]: I0130 17:07:17.648832 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" event={"ID":"b4d7437b-5c96-4130-93dc-119f95d08e50","Type":"ContainerDied","Data":"14fb0aaf687b09423bada1f029d4f3ac546a9f54407ce6b4cf3a31e07ae85b7b"} Jan 30 17:07:17 crc kubenswrapper[4875]: I0130 17:07:17.980139 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7s4zv_37fa5454-ad47-4960-be87-5d9d4e4eab0f/console/0.log" Jan 30 17:07:17 crc kubenswrapper[4875]: I0130 17:07:17.980248 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.109424 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-trusted-ca-bundle\") pod \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.109502 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-oauth-config\") pod \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.109629 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsdjs\" (UniqueName: \"kubernetes.io/projected/37fa5454-ad47-4960-be87-5d9d4e4eab0f-kube-api-access-hsdjs\") pod \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.109690 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-service-ca\") pod \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.109746 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-config\") pod \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.109796 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-oauth-serving-cert\") pod \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 
17:07:18.109825 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-serving-cert\") pod \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\" (UID: \"37fa5454-ad47-4960-be87-5d9d4e4eab0f\") " Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.110400 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "37fa5454-ad47-4960-be87-5d9d4e4eab0f" (UID: "37fa5454-ad47-4960-be87-5d9d4e4eab0f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.110548 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-service-ca" (OuterVolumeSpecName: "service-ca") pod "37fa5454-ad47-4960-be87-5d9d4e4eab0f" (UID: "37fa5454-ad47-4960-be87-5d9d4e4eab0f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.110961 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "37fa5454-ad47-4960-be87-5d9d4e4eab0f" (UID: "37fa5454-ad47-4960-be87-5d9d4e4eab0f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.111025 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-config" (OuterVolumeSpecName: "console-config") pod "37fa5454-ad47-4960-be87-5d9d4e4eab0f" (UID: "37fa5454-ad47-4960-be87-5d9d4e4eab0f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.118377 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37fa5454-ad47-4960-be87-5d9d4e4eab0f-kube-api-access-hsdjs" (OuterVolumeSpecName: "kube-api-access-hsdjs") pod "37fa5454-ad47-4960-be87-5d9d4e4eab0f" (UID: "37fa5454-ad47-4960-be87-5d9d4e4eab0f"). InnerVolumeSpecName "kube-api-access-hsdjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.119217 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "37fa5454-ad47-4960-be87-5d9d4e4eab0f" (UID: "37fa5454-ad47-4960-be87-5d9d4e4eab0f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.119454 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "37fa5454-ad47-4960-be87-5d9d4e4eab0f" (UID: "37fa5454-ad47-4960-be87-5d9d4e4eab0f"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.211049 4875 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.211101 4875 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.211114 4875 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.211125 4875 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.211136 4875 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/37fa5454-ad47-4960-be87-5d9d4e4eab0f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.211150 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsdjs\" (UniqueName: \"kubernetes.io/projected/37fa5454-ad47-4960-be87-5d9d4e4eab0f-kube-api-access-hsdjs\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.211165 4875 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/37fa5454-ad47-4960-be87-5d9d4e4eab0f-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.656961 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7s4zv_37fa5454-ad47-4960-be87-5d9d4e4eab0f/console/0.log" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.657014 4875 generic.go:334] "Generic (PLEG): container finished" podID="37fa5454-ad47-4960-be87-5d9d4e4eab0f" containerID="7b2bdbbeadc8800eb70ba36d1807dcfb88b324469fac8765274d0e7bea5a7d46" exitCode=2 Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.657119 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7s4zv" event={"ID":"37fa5454-ad47-4960-be87-5d9d4e4eab0f","Type":"ContainerDied","Data":"7b2bdbbeadc8800eb70ba36d1807dcfb88b324469fac8765274d0e7bea5a7d46"} Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.657145 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-7s4zv" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.657185 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7s4zv" event={"ID":"37fa5454-ad47-4960-be87-5d9d4e4eab0f","Type":"ContainerDied","Data":"7cecdaebedeb9d659dc44a872680c8161e985be854bf31e31b9c7da69133a52f"} Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.657218 4875 scope.go:117] "RemoveContainer" containerID="7b2bdbbeadc8800eb70ba36d1807dcfb88b324469fac8765274d0e7bea5a7d46" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.678931 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-7s4zv"] Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.680313 4875 scope.go:117] "RemoveContainer" containerID="7b2bdbbeadc8800eb70ba36d1807dcfb88b324469fac8765274d0e7bea5a7d46" Jan 30 17:07:18 crc kubenswrapper[4875]: E0130 17:07:18.680766 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b2bdbbeadc8800eb70ba36d1807dcfb88b324469fac8765274d0e7bea5a7d46\": container with ID starting with 7b2bdbbeadc8800eb70ba36d1807dcfb88b324469fac8765274d0e7bea5a7d46 not found: ID does not exist" containerID="7b2bdbbeadc8800eb70ba36d1807dcfb88b324469fac8765274d0e7bea5a7d46" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.680815 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b2bdbbeadc8800eb70ba36d1807dcfb88b324469fac8765274d0e7bea5a7d46"} err="failed to get container status \"7b2bdbbeadc8800eb70ba36d1807dcfb88b324469fac8765274d0e7bea5a7d46\": rpc error: code = NotFound desc = could not find container \"7b2bdbbeadc8800eb70ba36d1807dcfb88b324469fac8765274d0e7bea5a7d46\": container with ID starting with 7b2bdbbeadc8800eb70ba36d1807dcfb88b324469fac8765274d0e7bea5a7d46 not found: ID does not exist" Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.683039 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-7s4zv"] Jan 30 17:07:18 crc kubenswrapper[4875]: I0130 17:07:18.975783 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" Jan 30 17:07:19 crc kubenswrapper[4875]: I0130 17:07:19.124194 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4d7437b-5c96-4130-93dc-119f95d08e50-util\") pod \"b4d7437b-5c96-4130-93dc-119f95d08e50\" (UID: \"b4d7437b-5c96-4130-93dc-119f95d08e50\") " Jan 30 17:07:19 crc kubenswrapper[4875]: I0130 17:07:19.124261 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4d7437b-5c96-4130-93dc-119f95d08e50-bundle\") pod \"b4d7437b-5c96-4130-93dc-119f95d08e50\" (UID: \"b4d7437b-5c96-4130-93dc-119f95d08e50\") " Jan 30 17:07:19 crc kubenswrapper[4875]: I0130 17:07:19.124369 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bktf\" (UniqueName: \"kubernetes.io/projected/b4d7437b-5c96-4130-93dc-119f95d08e50-kube-api-access-5bktf\") pod \"b4d7437b-5c96-4130-93dc-119f95d08e50\" (UID: \"b4d7437b-5c96-4130-93dc-119f95d08e50\") " Jan 30 17:07:19 crc kubenswrapper[4875]: I0130 17:07:19.126327 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4d7437b-5c96-4130-93dc-119f95d08e50-bundle" (OuterVolumeSpecName: "bundle") pod "b4d7437b-5c96-4130-93dc-119f95d08e50" (UID: "b4d7437b-5c96-4130-93dc-119f95d08e50"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:07:19 crc kubenswrapper[4875]: I0130 17:07:19.128701 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d7437b-5c96-4130-93dc-119f95d08e50-kube-api-access-5bktf" (OuterVolumeSpecName: "kube-api-access-5bktf") pod "b4d7437b-5c96-4130-93dc-119f95d08e50" (UID: "b4d7437b-5c96-4130-93dc-119f95d08e50"). InnerVolumeSpecName "kube-api-access-5bktf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:07:19 crc kubenswrapper[4875]: I0130 17:07:19.225924 4875 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4d7437b-5c96-4130-93dc-119f95d08e50-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:19 crc kubenswrapper[4875]: I0130 17:07:19.226192 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bktf\" (UniqueName: \"kubernetes.io/projected/b4d7437b-5c96-4130-93dc-119f95d08e50-kube-api-access-5bktf\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:19 crc kubenswrapper[4875]: I0130 17:07:19.291813 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4d7437b-5c96-4130-93dc-119f95d08e50-util" (OuterVolumeSpecName: "util") pod "b4d7437b-5c96-4130-93dc-119f95d08e50" (UID: "b4d7437b-5c96-4130-93dc-119f95d08e50"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:07:19 crc kubenswrapper[4875]: I0130 17:07:19.326920 4875 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4d7437b-5c96-4130-93dc-119f95d08e50-util\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:19 crc kubenswrapper[4875]: I0130 17:07:19.669428 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" event={"ID":"b4d7437b-5c96-4130-93dc-119f95d08e50","Type":"ContainerDied","Data":"5921d1ff5863550e36be31987a6e10070c236d8df067fbd1d24279ce9c4e4724"} Jan 30 17:07:19 crc kubenswrapper[4875]: I0130 17:07:19.669466 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5921d1ff5863550e36be31987a6e10070c236d8df067fbd1d24279ce9c4e4724" Jan 30 17:07:19 crc kubenswrapper[4875]: I0130 17:07:19.669519 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs" Jan 30 17:07:20 crc kubenswrapper[4875]: I0130 17:07:20.148328 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37fa5454-ad47-4960-be87-5d9d4e4eab0f" path="/var/lib/kubelet/pods/37fa5454-ad47-4960-be87-5d9d4e4eab0f/volumes" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.071721 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc"] Jan 30 17:07:28 crc kubenswrapper[4875]: E0130 17:07:28.072512 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d7437b-5c96-4130-93dc-119f95d08e50" containerName="util" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.072527 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d7437b-5c96-4130-93dc-119f95d08e50" containerName="util" Jan 30 17:07:28 crc kubenswrapper[4875]: E0130 17:07:28.072536 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d7437b-5c96-4130-93dc-119f95d08e50" containerName="pull" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.072542 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d7437b-5c96-4130-93dc-119f95d08e50" containerName="pull" Jan 30 17:07:28 crc kubenswrapper[4875]: E0130 17:07:28.072560 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d7437b-5c96-4130-93dc-119f95d08e50" containerName="extract" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.072567 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d7437b-5c96-4130-93dc-119f95d08e50" containerName="extract" Jan 30 17:07:28 crc kubenswrapper[4875]: E0130 17:07:28.072578 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fa5454-ad47-4960-be87-5d9d4e4eab0f" containerName="console" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.072603 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fa5454-ad47-4960-be87-5d9d4e4eab0f" containerName="console" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.072716 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d7437b-5c96-4130-93dc-119f95d08e50" containerName="extract" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.072735 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="37fa5454-ad47-4960-be87-5d9d4e4eab0f" containerName="console" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.073159 4875 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.075146 4875 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.075314 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.075428 4875 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-d9gg2" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.075623 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.075821 4875 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.098767 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc"] Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.139352 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xlp9\" (UniqueName: \"kubernetes.io/projected/597e5eb9-1876-4309-b8e1-a870c946cfc0-kube-api-access-2xlp9\") pod \"metallb-operator-controller-manager-6f788d9fdf-mb5fc\" (UID: \"597e5eb9-1876-4309-b8e1-a870c946cfc0\") " pod="metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.139419 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/597e5eb9-1876-4309-b8e1-a870c946cfc0-apiservice-cert\") pod \"metallb-operator-controller-manager-6f788d9fdf-mb5fc\" (UID: \"597e5eb9-1876-4309-b8e1-a870c946cfc0\") " pod="metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.139449 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/597e5eb9-1876-4309-b8e1-a870c946cfc0-webhook-cert\") pod \"metallb-operator-controller-manager-6f788d9fdf-mb5fc\" (UID: \"597e5eb9-1876-4309-b8e1-a870c946cfc0\") " pod="metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.240518 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xlp9\" (UniqueName: \"kubernetes.io/projected/597e5eb9-1876-4309-b8e1-a870c946cfc0-kube-api-access-2xlp9\") pod \"metallb-operator-controller-manager-6f788d9fdf-mb5fc\" (UID: \"597e5eb9-1876-4309-b8e1-a870c946cfc0\") " pod="metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.240609 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/597e5eb9-1876-4309-b8e1-a870c946cfc0-apiservice-cert\") pod \"metallb-operator-controller-manager-6f788d9fdf-mb5fc\" (UID: \"597e5eb9-1876-4309-b8e1-a870c946cfc0\") " pod="metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc" Jan 30 17:07:28 crc kubenswrapper[4875]: 
I0130 17:07:28.240640 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/597e5eb9-1876-4309-b8e1-a870c946cfc0-webhook-cert\") pod \"metallb-operator-controller-manager-6f788d9fdf-mb5fc\" (UID: \"597e5eb9-1876-4309-b8e1-a870c946cfc0\") " pod="metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.245741 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/597e5eb9-1876-4309-b8e1-a870c946cfc0-apiservice-cert\") pod \"metallb-operator-controller-manager-6f788d9fdf-mb5fc\" (UID: \"597e5eb9-1876-4309-b8e1-a870c946cfc0\") " pod="metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.245763 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/597e5eb9-1876-4309-b8e1-a870c946cfc0-webhook-cert\") pod \"metallb-operator-controller-manager-6f788d9fdf-mb5fc\" (UID: \"597e5eb9-1876-4309-b8e1-a870c946cfc0\") " pod="metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.255210 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xlp9\" (UniqueName: \"kubernetes.io/projected/597e5eb9-1876-4309-b8e1-a870c946cfc0-kube-api-access-2xlp9\") pod \"metallb-operator-controller-manager-6f788d9fdf-mb5fc\" (UID: \"597e5eb9-1876-4309-b8e1-a870c946cfc0\") " pod="metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.306935 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx"] Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.307777 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.310712 4875 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.310718 4875 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.312525 4875 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-pq87l" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.322558 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx"] Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.392853 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.443276 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bdc69284-9636-490f-97ca-8e32af6b9144-webhook-cert\") pod \"metallb-operator-webhook-server-d45878f5b-stwlx\" (UID: \"bdc69284-9636-490f-97ca-8e32af6b9144\") " pod="metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.443351 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bdc69284-9636-490f-97ca-8e32af6b9144-apiservice-cert\") pod \"metallb-operator-webhook-server-d45878f5b-stwlx\" (UID: \"bdc69284-9636-490f-97ca-8e32af6b9144\") " pod="metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.443375 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6mck\" (UniqueName: \"kubernetes.io/projected/bdc69284-9636-490f-97ca-8e32af6b9144-kube-api-access-c6mck\") pod \"metallb-operator-webhook-server-d45878f5b-stwlx\" (UID: \"bdc69284-9636-490f-97ca-8e32af6b9144\") " pod="metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.545237 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bdc69284-9636-490f-97ca-8e32af6b9144-apiservice-cert\") pod \"metallb-operator-webhook-server-d45878f5b-stwlx\" (UID: \"bdc69284-9636-490f-97ca-8e32af6b9144\") " pod="metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.545664 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6mck\" (UniqueName: \"kubernetes.io/projected/bdc69284-9636-490f-97ca-8e32af6b9144-kube-api-access-c6mck\") pod \"metallb-operator-webhook-server-d45878f5b-stwlx\" (UID: \"bdc69284-9636-490f-97ca-8e32af6b9144\") " pod="metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.545739 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bdc69284-9636-490f-97ca-8e32af6b9144-webhook-cert\") pod \"metallb-operator-webhook-server-d45878f5b-stwlx\" (UID: \"bdc69284-9636-490f-97ca-8e32af6b9144\") " pod="metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.552416 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bdc69284-9636-490f-97ca-8e32af6b9144-apiservice-cert\") pod \"metallb-operator-webhook-server-d45878f5b-stwlx\" (UID: \"bdc69284-9636-490f-97ca-8e32af6b9144\") " pod="metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.563985 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bdc69284-9636-490f-97ca-8e32af6b9144-webhook-cert\") pod \"metallb-operator-webhook-server-d45878f5b-stwlx\" (UID: \"bdc69284-9636-490f-97ca-8e32af6b9144\") " 
pod="metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.566907 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6mck\" (UniqueName: \"kubernetes.io/projected/bdc69284-9636-490f-97ca-8e32af6b9144-kube-api-access-c6mck\") pod \"metallb-operator-webhook-server-d45878f5b-stwlx\" (UID: \"bdc69284-9636-490f-97ca-8e32af6b9144\") " pod="metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.622490 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx" Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.677285 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc"] Jan 30 17:07:28 crc kubenswrapper[4875]: W0130 17:07:28.681689 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod597e5eb9_1876_4309_b8e1_a870c946cfc0.slice/crio-767b7f9c8f1f95f851bde3d5321241caa13601e589a2813b2ccc14f93a4c5823 WatchSource:0}: Error finding container 767b7f9c8f1f95f851bde3d5321241caa13601e589a2813b2ccc14f93a4c5823: Status 404 returned error can't find the container with id 767b7f9c8f1f95f851bde3d5321241caa13601e589a2813b2ccc14f93a4c5823 Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.722270 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc" event={"ID":"597e5eb9-1876-4309-b8e1-a870c946cfc0","Type":"ContainerStarted","Data":"767b7f9c8f1f95f851bde3d5321241caa13601e589a2813b2ccc14f93a4c5823"} Jan 30 17:07:28 crc kubenswrapper[4875]: I0130 17:07:28.806069 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx"] Jan 30 17:07:28 crc kubenswrapper[4875]: W0130 17:07:28.815610 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdc69284_9636_490f_97ca_8e32af6b9144.slice/crio-01a6d429bd9da2acb79fc15d24e31a57c36a7917a651f495ccb06a40c885f624 WatchSource:0}: Error finding container 01a6d429bd9da2acb79fc15d24e31a57c36a7917a651f495ccb06a40c885f624: Status 404 returned error can't find the container with id 01a6d429bd9da2acb79fc15d24e31a57c36a7917a651f495ccb06a40c885f624 Jan 30 17:07:29 crc kubenswrapper[4875]: I0130 17:07:29.728830 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx" event={"ID":"bdc69284-9636-490f-97ca-8e32af6b9144","Type":"ContainerStarted","Data":"01a6d429bd9da2acb79fc15d24e31a57c36a7917a651f495ccb06a40c885f624"} Jan 30 17:07:34 crc kubenswrapper[4875]: I0130 17:07:34.773170 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx" event={"ID":"bdc69284-9636-490f-97ca-8e32af6b9144","Type":"ContainerStarted","Data":"13e7200d201c299a7f9f0d5d7198083bc8c2e87698656e20c8b68891bdb5089e"} Jan 30 17:07:34 crc kubenswrapper[4875]: I0130 17:07:34.773819 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx" Jan 30 17:07:34 crc kubenswrapper[4875]: I0130 17:07:34.774434 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc" event={"ID":"597e5eb9-1876-4309-b8e1-a870c946cfc0","Type":"ContainerStarted","Data":"692dc2f108e16fe3ead1fe67fb442d02de5d68dbf04b60992728f4e31c3b47e5"} Jan 30 17:07:34 crc kubenswrapper[4875]: I0130 17:07:34.775044 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc" Jan 30 17:07:34 crc kubenswrapper[4875]: I0130 17:07:34.792468 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx" podStartSLOduration=1.9780496680000001 podStartE2EDuration="6.792445712s" podCreationTimestamp="2026-01-30 17:07:28 +0000 UTC" firstStartedPulling="2026-01-30 17:07:28.824698845 +0000 UTC m=+659.372062228" lastFinishedPulling="2026-01-30 17:07:33.639094889 +0000 UTC m=+664.186458272" observedRunningTime="2026-01-30 17:07:34.790720188 +0000 UTC m=+665.338083571" watchObservedRunningTime="2026-01-30 17:07:34.792445712 +0000 UTC m=+665.339809095" Jan 30 17:07:34 crc kubenswrapper[4875]: I0130 17:07:34.810381 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc" podStartSLOduration=1.859853755 podStartE2EDuration="6.810355851s" podCreationTimestamp="2026-01-30 17:07:28 +0000 UTC" firstStartedPulling="2026-01-30 17:07:28.684495418 +0000 UTC m=+659.231858801" lastFinishedPulling="2026-01-30 17:07:33.634997514 +0000 UTC m=+664.182360897" observedRunningTime="2026-01-30 17:07:34.809876496 +0000 UTC m=+665.357239899" watchObservedRunningTime="2026-01-30 17:07:34.810355851 +0000 UTC m=+665.357719254" Jan 30 17:07:48 crc kubenswrapper[4875]: I0130 17:07:48.626852 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-d45878f5b-stwlx" Jan 30 17:08:08 crc kubenswrapper[4875]: I0130 17:08:08.395962 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6f788d9fdf-mb5fc" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.217497 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-qznj9"] Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.220429 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.222923 4875 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.223125 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.223402 4875 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-bc2pm" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.239441 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-l2qcq"] Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.240184 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-l2qcq" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.243170 4875 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.254243 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-l2qcq"] Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.296636 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/099cb5be-6270-4a46-b135-560981a13b91-frr-startup\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.296680 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b726108f-6096-4549-a56e-4aaef276d309-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-l2qcq\" (UID: \"b726108f-6096-4549-a56e-4aaef276d309\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-l2qcq" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.296704 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/099cb5be-6270-4a46-b135-560981a13b91-metrics\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.296744 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/099cb5be-6270-4a46-b135-560981a13b91-frr-conf\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.296767 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/099cb5be-6270-4a46-b135-560981a13b91-metrics-certs\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.296783 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9666\" (UniqueName: \"kubernetes.io/projected/099cb5be-6270-4a46-b135-560981a13b91-kube-api-access-q9666\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.296806 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/099cb5be-6270-4a46-b135-560981a13b91-frr-sockets\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.296823 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/099cb5be-6270-4a46-b135-560981a13b91-reloader\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 
17:08:09.296842 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zq6q\" (UniqueName: \"kubernetes.io/projected/b726108f-6096-4549-a56e-4aaef276d309-kube-api-access-5zq6q\") pod \"frr-k8s-webhook-server-7df86c4f6c-l2qcq\" (UID: \"b726108f-6096-4549-a56e-4aaef276d309\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-l2qcq" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.333095 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2t6jc"] Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.334303 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2t6jc" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.337089 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.337191 4875 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-nvnv7" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.339355 4875 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.339481 4875 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.347435 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-5sf9s"] Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.348459 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-5sf9s" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.350716 4875 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.359613 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-5sf9s"] Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399218 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/edcbb6f3-6630-4b11-a936-873403d63ecb-metallb-excludel2\") pod \"speaker-2t6jc\" (UID: \"edcbb6f3-6630-4b11-a936-873403d63ecb\") " pod="metallb-system/speaker-2t6jc" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399273 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/099cb5be-6270-4a46-b135-560981a13b91-metrics-certs\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399297 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9666\" (UniqueName: \"kubernetes.io/projected/099cb5be-6270-4a46-b135-560981a13b91-kube-api-access-q9666\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399324 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/099cb5be-6270-4a46-b135-560981a13b91-frr-sockets\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " 
pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399344 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/93d86069-0a11-45c8-8438-f10ddb9b0dc5-cert\") pod \"controller-6968d8fdc4-5sf9s\" (UID: \"93d86069-0a11-45c8-8438-f10ddb9b0dc5\") " pod="metallb-system/controller-6968d8fdc4-5sf9s" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399364 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/099cb5be-6270-4a46-b135-560981a13b91-reloader\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399385 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7klb\" (UniqueName: \"kubernetes.io/projected/edcbb6f3-6630-4b11-a936-873403d63ecb-kube-api-access-n7klb\") pod \"speaker-2t6jc\" (UID: \"edcbb6f3-6630-4b11-a936-873403d63ecb\") " pod="metallb-system/speaker-2t6jc" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399404 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zq6q\" (UniqueName: \"kubernetes.io/projected/b726108f-6096-4549-a56e-4aaef276d309-kube-api-access-5zq6q\") pod \"frr-k8s-webhook-server-7df86c4f6c-l2qcq\" (UID: \"b726108f-6096-4549-a56e-4aaef276d309\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-l2qcq" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399421 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk6fv\" (UniqueName: \"kubernetes.io/projected/93d86069-0a11-45c8-8438-f10ddb9b0dc5-kube-api-access-pk6fv\") pod \"controller-6968d8fdc4-5sf9s\" (UID: \"93d86069-0a11-45c8-8438-f10ddb9b0dc5\") " pod="metallb-system/controller-6968d8fdc4-5sf9s" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399444 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/edcbb6f3-6630-4b11-a936-873403d63ecb-metrics-certs\") pod \"speaker-2t6jc\" (UID: \"edcbb6f3-6630-4b11-a936-873403d63ecb\") " pod="metallb-system/speaker-2t6jc" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399465 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/099cb5be-6270-4a46-b135-560981a13b91-frr-startup\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399484 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b726108f-6096-4549-a56e-4aaef276d309-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-l2qcq\" (UID: \"b726108f-6096-4549-a56e-4aaef276d309\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-l2qcq" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399500 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93d86069-0a11-45c8-8438-f10ddb9b0dc5-metrics-certs\") pod \"controller-6968d8fdc4-5sf9s\" (UID: \"93d86069-0a11-45c8-8438-f10ddb9b0dc5\") " 
pod="metallb-system/controller-6968d8fdc4-5sf9s" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399519 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/099cb5be-6270-4a46-b135-560981a13b91-metrics\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399553 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/099cb5be-6270-4a46-b135-560981a13b91-frr-conf\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399596 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/edcbb6f3-6630-4b11-a936-873403d63ecb-memberlist\") pod \"speaker-2t6jc\" (UID: \"edcbb6f3-6630-4b11-a936-873403d63ecb\") " pod="metallb-system/speaker-2t6jc" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399923 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/099cb5be-6270-4a46-b135-560981a13b91-frr-sockets\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.399956 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/099cb5be-6270-4a46-b135-560981a13b91-reloader\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: E0130 17:08:09.400073 4875 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 30 17:08:09 crc kubenswrapper[4875]: E0130 17:08:09.400129 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b726108f-6096-4549-a56e-4aaef276d309-cert podName:b726108f-6096-4549-a56e-4aaef276d309 nodeName:}" failed. No retries permitted until 2026-01-30 17:08:09.900112744 +0000 UTC m=+700.447476117 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b726108f-6096-4549-a56e-4aaef276d309-cert") pod "frr-k8s-webhook-server-7df86c4f6c-l2qcq" (UID: "b726108f-6096-4549-a56e-4aaef276d309") : secret "frr-k8s-webhook-server-cert" not found Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.400395 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/099cb5be-6270-4a46-b135-560981a13b91-metrics\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.400524 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/099cb5be-6270-4a46-b135-560981a13b91-frr-conf\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.401091 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/099cb5be-6270-4a46-b135-560981a13b91-frr-startup\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.406832 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/099cb5be-6270-4a46-b135-560981a13b91-metrics-certs\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.415946 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9666\" (UniqueName: \"kubernetes.io/projected/099cb5be-6270-4a46-b135-560981a13b91-kube-api-access-q9666\") pod \"frr-k8s-qznj9\" (UID: \"099cb5be-6270-4a46-b135-560981a13b91\") " pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.417412 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zq6q\" (UniqueName: \"kubernetes.io/projected/b726108f-6096-4549-a56e-4aaef276d309-kube-api-access-5zq6q\") pod \"frr-k8s-webhook-server-7df86c4f6c-l2qcq\" (UID: \"b726108f-6096-4549-a56e-4aaef276d309\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-l2qcq" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.500900 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/93d86069-0a11-45c8-8438-f10ddb9b0dc5-cert\") pod \"controller-6968d8fdc4-5sf9s\" (UID: \"93d86069-0a11-45c8-8438-f10ddb9b0dc5\") " pod="metallb-system/controller-6968d8fdc4-5sf9s" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.500950 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7klb\" (UniqueName: \"kubernetes.io/projected/edcbb6f3-6630-4b11-a936-873403d63ecb-kube-api-access-n7klb\") pod \"speaker-2t6jc\" (UID: \"edcbb6f3-6630-4b11-a936-873403d63ecb\") " pod="metallb-system/speaker-2t6jc" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.500973 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk6fv\" (UniqueName: \"kubernetes.io/projected/93d86069-0a11-45c8-8438-f10ddb9b0dc5-kube-api-access-pk6fv\") pod \"controller-6968d8fdc4-5sf9s\" (UID: \"93d86069-0a11-45c8-8438-f10ddb9b0dc5\") " 
pod="metallb-system/controller-6968d8fdc4-5sf9s" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.501012 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/edcbb6f3-6630-4b11-a936-873403d63ecb-metrics-certs\") pod \"speaker-2t6jc\" (UID: \"edcbb6f3-6630-4b11-a936-873403d63ecb\") " pod="metallb-system/speaker-2t6jc" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.501049 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93d86069-0a11-45c8-8438-f10ddb9b0dc5-metrics-certs\") pod \"controller-6968d8fdc4-5sf9s\" (UID: \"93d86069-0a11-45c8-8438-f10ddb9b0dc5\") " pod="metallb-system/controller-6968d8fdc4-5sf9s" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.501094 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/edcbb6f3-6630-4b11-a936-873403d63ecb-memberlist\") pod \"speaker-2t6jc\" (UID: \"edcbb6f3-6630-4b11-a936-873403d63ecb\") " pod="metallb-system/speaker-2t6jc" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.501113 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/edcbb6f3-6630-4b11-a936-873403d63ecb-metallb-excludel2\") pod \"speaker-2t6jc\" (UID: \"edcbb6f3-6630-4b11-a936-873403d63ecb\") " pod="metallb-system/speaker-2t6jc" Jan 30 17:08:09 crc kubenswrapper[4875]: E0130 17:08:09.501171 4875 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 30 17:08:09 crc kubenswrapper[4875]: E0130 17:08:09.501234 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edcbb6f3-6630-4b11-a936-873403d63ecb-metrics-certs podName:edcbb6f3-6630-4b11-a936-873403d63ecb nodeName:}" failed. No retries permitted until 2026-01-30 17:08:10.001219786 +0000 UTC m=+700.548583169 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/edcbb6f3-6630-4b11-a936-873403d63ecb-metrics-certs") pod "speaker-2t6jc" (UID: "edcbb6f3-6630-4b11-a936-873403d63ecb") : secret "speaker-certs-secret" not found Jan 30 17:08:09 crc kubenswrapper[4875]: E0130 17:08:09.501632 4875 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 17:08:09 crc kubenswrapper[4875]: E0130 17:08:09.501662 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edcbb6f3-6630-4b11-a936-873403d63ecb-memberlist podName:edcbb6f3-6630-4b11-a936-873403d63ecb nodeName:}" failed. No retries permitted until 2026-01-30 17:08:10.00165534 +0000 UTC m=+700.549018723 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/edcbb6f3-6630-4b11-a936-873403d63ecb-memberlist") pod "speaker-2t6jc" (UID: "edcbb6f3-6630-4b11-a936-873403d63ecb") : secret "metallb-memberlist" not found Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.501758 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/edcbb6f3-6630-4b11-a936-873403d63ecb-metallb-excludel2\") pod \"speaker-2t6jc\" (UID: \"edcbb6f3-6630-4b11-a936-873403d63ecb\") " pod="metallb-system/speaker-2t6jc" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.502277 4875 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.504512 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93d86069-0a11-45c8-8438-f10ddb9b0dc5-metrics-certs\") pod \"controller-6968d8fdc4-5sf9s\" (UID: \"93d86069-0a11-45c8-8438-f10ddb9b0dc5\") " pod="metallb-system/controller-6968d8fdc4-5sf9s" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.518012 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/93d86069-0a11-45c8-8438-f10ddb9b0dc5-cert\") pod \"controller-6968d8fdc4-5sf9s\" (UID: \"93d86069-0a11-45c8-8438-f10ddb9b0dc5\") " pod="metallb-system/controller-6968d8fdc4-5sf9s" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.518295 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk6fv\" (UniqueName: \"kubernetes.io/projected/93d86069-0a11-45c8-8438-f10ddb9b0dc5-kube-api-access-pk6fv\") pod \"controller-6968d8fdc4-5sf9s\" (UID: \"93d86069-0a11-45c8-8438-f10ddb9b0dc5\") " pod="metallb-system/controller-6968d8fdc4-5sf9s" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.535193 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7klb\" (UniqueName: \"kubernetes.io/projected/edcbb6f3-6630-4b11-a936-873403d63ecb-kube-api-access-n7klb\") pod \"speaker-2t6jc\" (UID: \"edcbb6f3-6630-4b11-a936-873403d63ecb\") " pod="metallb-system/speaker-2t6jc" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.538889 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.661765 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-5sf9s" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.852810 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-5sf9s"] Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.905745 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b726108f-6096-4549-a56e-4aaef276d309-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-l2qcq\" (UID: \"b726108f-6096-4549-a56e-4aaef276d309\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-l2qcq" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.912678 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b726108f-6096-4549-a56e-4aaef276d309-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-l2qcq\" (UID: \"b726108f-6096-4549-a56e-4aaef276d309\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-l2qcq" Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.964854 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-5sf9s" event={"ID":"93d86069-0a11-45c8-8438-f10ddb9b0dc5","Type":"ContainerStarted","Data":"b81d7596acdb829ef2c2ac6d7256e32717dcda7576d58bc3199163b0822c1651"} Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.964901 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-5sf9s" event={"ID":"93d86069-0a11-45c8-8438-f10ddb9b0dc5","Type":"ContainerStarted","Data":"e7176552e00ce1030b72b04828592303538ac5cd8fc329598634ad19c8a9ea34"} Jan 30 17:08:09 crc kubenswrapper[4875]: I0130 17:08:09.967305 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qznj9" event={"ID":"099cb5be-6270-4a46-b135-560981a13b91","Type":"ContainerStarted","Data":"2ca19bbbaee423b09987e3347eb57b3dd777acb5d2bbde0c396c1433c8a846bf"} Jan 30 17:08:10 crc kubenswrapper[4875]: I0130 17:08:10.007688 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/edcbb6f3-6630-4b11-a936-873403d63ecb-metrics-certs\") pod \"speaker-2t6jc\" (UID: \"edcbb6f3-6630-4b11-a936-873403d63ecb\") " pod="metallb-system/speaker-2t6jc" Jan 30 17:08:10 crc kubenswrapper[4875]: I0130 17:08:10.007803 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/edcbb6f3-6630-4b11-a936-873403d63ecb-memberlist\") pod \"speaker-2t6jc\" (UID: \"edcbb6f3-6630-4b11-a936-873403d63ecb\") " pod="metallb-system/speaker-2t6jc" Jan 30 17:08:10 crc kubenswrapper[4875]: E0130 17:08:10.007929 4875 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 17:08:10 crc kubenswrapper[4875]: E0130 17:08:10.007988 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edcbb6f3-6630-4b11-a936-873403d63ecb-memberlist podName:edcbb6f3-6630-4b11-a936-873403d63ecb nodeName:}" failed. No retries permitted until 2026-01-30 17:08:11.007968336 +0000 UTC m=+701.555331719 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/edcbb6f3-6630-4b11-a936-873403d63ecb-memberlist") pod "speaker-2t6jc" (UID: "edcbb6f3-6630-4b11-a936-873403d63ecb") : secret "metallb-memberlist" not found Jan 30 17:08:10 crc kubenswrapper[4875]: I0130 17:08:10.011943 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/edcbb6f3-6630-4b11-a936-873403d63ecb-metrics-certs\") pod \"speaker-2t6jc\" (UID: \"edcbb6f3-6630-4b11-a936-873403d63ecb\") " pod="metallb-system/speaker-2t6jc" Jan 30 17:08:10 crc kubenswrapper[4875]: I0130 17:08:10.158960 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-l2qcq" Jan 30 17:08:10 crc kubenswrapper[4875]: I0130 17:08:10.621926 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-l2qcq"] Jan 30 17:08:10 crc kubenswrapper[4875]: W0130 17:08:10.630734 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb726108f_6096_4549_a56e_4aaef276d309.slice/crio-0aa85122dcbbf9b4038fadce762727f881bb1693549b77ce0ecfe2871cc2dc67 WatchSource:0}: Error finding container 0aa85122dcbbf9b4038fadce762727f881bb1693549b77ce0ecfe2871cc2dc67: Status 404 returned error can't find the container with id 0aa85122dcbbf9b4038fadce762727f881bb1693549b77ce0ecfe2871cc2dc67 Jan 30 17:08:10 crc kubenswrapper[4875]: I0130 17:08:10.975402 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-5sf9s" event={"ID":"93d86069-0a11-45c8-8438-f10ddb9b0dc5","Type":"ContainerStarted","Data":"e1c48d280adafaa262b50427e5df28125b5aecb03ed1759510eda0dbd9d6a3d1"} Jan 30 17:08:10 crc kubenswrapper[4875]: I0130 17:08:10.975694 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-5sf9s" Jan 30 17:08:10 crc kubenswrapper[4875]: I0130 17:08:10.979049 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-l2qcq" event={"ID":"b726108f-6096-4549-a56e-4aaef276d309","Type":"ContainerStarted","Data":"0aa85122dcbbf9b4038fadce762727f881bb1693549b77ce0ecfe2871cc2dc67"} Jan 30 17:08:11 crc kubenswrapper[4875]: I0130 17:08:11.026704 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/edcbb6f3-6630-4b11-a936-873403d63ecb-memberlist\") pod \"speaker-2t6jc\" (UID: \"edcbb6f3-6630-4b11-a936-873403d63ecb\") " pod="metallb-system/speaker-2t6jc" Jan 30 17:08:11 crc kubenswrapper[4875]: I0130 17:08:11.040246 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/edcbb6f3-6630-4b11-a936-873403d63ecb-memberlist\") pod \"speaker-2t6jc\" (UID: \"edcbb6f3-6630-4b11-a936-873403d63ecb\") " pod="metallb-system/speaker-2t6jc" Jan 30 17:08:11 crc kubenswrapper[4875]: I0130 17:08:11.149366 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-2t6jc" Jan 30 17:08:11 crc kubenswrapper[4875]: I0130 17:08:11.989115 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2t6jc" event={"ID":"edcbb6f3-6630-4b11-a936-873403d63ecb","Type":"ContainerStarted","Data":"a0a331949ff932df0c13e6177cab19a534b22624a4c894bf5cd4584046405d4b"} Jan 30 17:08:11 crc kubenswrapper[4875]: I0130 17:08:11.989478 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2t6jc" event={"ID":"edcbb6f3-6630-4b11-a936-873403d63ecb","Type":"ContainerStarted","Data":"410e310719c3fd5065c6cdc39848a12e77f71791d5b10f97af84857624c74d72"} Jan 30 17:08:11 crc kubenswrapper[4875]: I0130 17:08:11.989501 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2t6jc" event={"ID":"edcbb6f3-6630-4b11-a936-873403d63ecb","Type":"ContainerStarted","Data":"d59b9a16ffc23cd741138f747006e94453416f9eb01657dbd12e3294504ad3f0"} Jan 30 17:08:11 crc kubenswrapper[4875]: I0130 17:08:11.989764 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-2t6jc" Jan 30 17:08:12 crc kubenswrapper[4875]: I0130 17:08:12.010906 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2t6jc" podStartSLOduration=3.010883193 podStartE2EDuration="3.010883193s" podCreationTimestamp="2026-01-30 17:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:08:12.005230233 +0000 UTC m=+702.552593626" watchObservedRunningTime="2026-01-30 17:08:12.010883193 +0000 UTC m=+702.558246586" Jan 30 17:08:12 crc kubenswrapper[4875]: I0130 17:08:12.011173 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-5sf9s" podStartSLOduration=3.011166965 podStartE2EDuration="3.011166965s" podCreationTimestamp="2026-01-30 17:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:08:11.014915578 +0000 UTC m=+701.562279001" watchObservedRunningTime="2026-01-30 17:08:12.011166965 +0000 UTC m=+702.558530348" Jan 30 17:08:18 crc kubenswrapper[4875]: I0130 17:08:18.031935 4875 generic.go:334] "Generic (PLEG): container finished" podID="099cb5be-6270-4a46-b135-560981a13b91" containerID="cdc917cbe6a3036fd69c7ecad983b538114bcdf69173784c50960b95c50f214d" exitCode=0 Jan 30 17:08:18 crc kubenswrapper[4875]: I0130 17:08:18.031985 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qznj9" event={"ID":"099cb5be-6270-4a46-b135-560981a13b91","Type":"ContainerDied","Data":"cdc917cbe6a3036fd69c7ecad983b538114bcdf69173784c50960b95c50f214d"} Jan 30 17:08:18 crc kubenswrapper[4875]: I0130 17:08:18.034011 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-l2qcq" event={"ID":"b726108f-6096-4549-a56e-4aaef276d309","Type":"ContainerStarted","Data":"1a47f096d8c6d79dbca2b3d8b67b921716bac28a3474a6b9959e4992018271c0"} Jan 30 17:08:18 crc kubenswrapper[4875]: I0130 17:08:18.034245 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-l2qcq" Jan 30 17:08:18 crc kubenswrapper[4875]: I0130 17:08:18.079755 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-l2qcq" 
podStartSLOduration=2.5745115739999997 podStartE2EDuration="9.079732412s" podCreationTimestamp="2026-01-30 17:08:09 +0000 UTC" firstStartedPulling="2026-01-30 17:08:10.633035949 +0000 UTC m=+701.180399342" lastFinishedPulling="2026-01-30 17:08:17.138256797 +0000 UTC m=+707.685620180" observedRunningTime="2026-01-30 17:08:18.071652951 +0000 UTC m=+708.619016364" watchObservedRunningTime="2026-01-30 17:08:18.079732412 +0000 UTC m=+708.627095795" Jan 30 17:08:19 crc kubenswrapper[4875]: I0130 17:08:19.041483 4875 generic.go:334] "Generic (PLEG): container finished" podID="099cb5be-6270-4a46-b135-560981a13b91" containerID="bc7134e9bd11882525917a80cc65dc9cc516a270da082efc52d37a50017ed237" exitCode=0 Jan 30 17:08:19 crc kubenswrapper[4875]: I0130 17:08:19.041607 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qznj9" event={"ID":"099cb5be-6270-4a46-b135-560981a13b91","Type":"ContainerDied","Data":"bc7134e9bd11882525917a80cc65dc9cc516a270da082efc52d37a50017ed237"} Jan 30 17:08:19 crc kubenswrapper[4875]: I0130 17:08:19.667729 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-5sf9s" Jan 30 17:08:20 crc kubenswrapper[4875]: I0130 17:08:20.051741 4875 generic.go:334] "Generic (PLEG): container finished" podID="099cb5be-6270-4a46-b135-560981a13b91" containerID="f0b2eda21b1baff355e2c0ea418fea861d5e84845af237846248acc4a6b5124e" exitCode=0 Jan 30 17:08:20 crc kubenswrapper[4875]: I0130 17:08:20.051814 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qznj9" event={"ID":"099cb5be-6270-4a46-b135-560981a13b91","Type":"ContainerDied","Data":"f0b2eda21b1baff355e2c0ea418fea861d5e84845af237846248acc4a6b5124e"} Jan 30 17:08:21 crc kubenswrapper[4875]: I0130 17:08:21.061349 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qznj9" event={"ID":"099cb5be-6270-4a46-b135-560981a13b91","Type":"ContainerStarted","Data":"28ec822e2eb011eeb82be19bcf105db9c0e4a2dffad2b239b3b86b8de4901967"} Jan 30 17:08:21 crc kubenswrapper[4875]: I0130 17:08:21.061781 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qznj9" event={"ID":"099cb5be-6270-4a46-b135-560981a13b91","Type":"ContainerStarted","Data":"5a9111a11c71362f0c3d7efc320fac17202bb3f8e348219bc00f58cad17c39bf"} Jan 30 17:08:21 crc kubenswrapper[4875]: I0130 17:08:21.061815 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:21 crc kubenswrapper[4875]: I0130 17:08:21.061836 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qznj9" event={"ID":"099cb5be-6270-4a46-b135-560981a13b91","Type":"ContainerStarted","Data":"2b27998dcabec7241f2d552aa5d00fa56ea97e24d3d5c2b6f64c9ab757525698"} Jan 30 17:08:21 crc kubenswrapper[4875]: I0130 17:08:21.061853 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qznj9" event={"ID":"099cb5be-6270-4a46-b135-560981a13b91","Type":"ContainerStarted","Data":"58e218917848a65d4845ec298d198c75d5f26a86ed5a17d71dde7811349bd338"} Jan 30 17:08:21 crc kubenswrapper[4875]: I0130 17:08:21.061870 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qznj9" event={"ID":"099cb5be-6270-4a46-b135-560981a13b91","Type":"ContainerStarted","Data":"a4089f80ff66d47c18a572dc427cfd4f4f80c3b6f8cc72d81416102e08126496"} Jan 30 17:08:21 crc kubenswrapper[4875]: I0130 17:08:21.061887 4875 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/frr-k8s-qznj9" event={"ID":"099cb5be-6270-4a46-b135-560981a13b91","Type":"ContainerStarted","Data":"bce152e0ee56a40f4e2b98f0232c7e7e7af83aafabbc07b44fe3fc188ef65181"} Jan 30 17:08:21 crc kubenswrapper[4875]: I0130 17:08:21.083222 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-qznj9" podStartSLOduration=4.600065037 podStartE2EDuration="12.083205261s" podCreationTimestamp="2026-01-30 17:08:09 +0000 UTC" firstStartedPulling="2026-01-30 17:08:09.674003161 +0000 UTC m=+700.221366544" lastFinishedPulling="2026-01-30 17:08:17.157143385 +0000 UTC m=+707.704506768" observedRunningTime="2026-01-30 17:08:21.082043775 +0000 UTC m=+711.629407168" watchObservedRunningTime="2026-01-30 17:08:21.083205261 +0000 UTC m=+711.630568644" Jan 30 17:08:21 crc kubenswrapper[4875]: I0130 17:08:21.153158 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2t6jc" Jan 30 17:08:22 crc kubenswrapper[4875]: I0130 17:08:22.763897 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr"] Jan 30 17:08:22 crc kubenswrapper[4875]: I0130 17:08:22.765231 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" Jan 30 17:08:22 crc kubenswrapper[4875]: I0130 17:08:22.767480 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 17:08:22 crc kubenswrapper[4875]: I0130 17:08:22.773201 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6f44679-6e5c-49d2-b215-7af315008c79-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr\" (UID: \"f6f44679-6e5c-49d2-b215-7af315008c79\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" Jan 30 17:08:22 crc kubenswrapper[4875]: I0130 17:08:22.773272 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6f44679-6e5c-49d2-b215-7af315008c79-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr\" (UID: \"f6f44679-6e5c-49d2-b215-7af315008c79\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" Jan 30 17:08:22 crc kubenswrapper[4875]: I0130 17:08:22.773326 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfrjj\" (UniqueName: \"kubernetes.io/projected/f6f44679-6e5c-49d2-b215-7af315008c79-kube-api-access-dfrjj\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr\" (UID: \"f6f44679-6e5c-49d2-b215-7af315008c79\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" Jan 30 17:08:22 crc kubenswrapper[4875]: I0130 17:08:22.777004 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr"] Jan 30 17:08:22 crc kubenswrapper[4875]: I0130 17:08:22.874118 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6f44679-6e5c-49d2-b215-7af315008c79-bundle\") pod 
\"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr\" (UID: \"f6f44679-6e5c-49d2-b215-7af315008c79\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" Jan 30 17:08:22 crc kubenswrapper[4875]: I0130 17:08:22.874206 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6f44679-6e5c-49d2-b215-7af315008c79-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr\" (UID: \"f6f44679-6e5c-49d2-b215-7af315008c79\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" Jan 30 17:08:22 crc kubenswrapper[4875]: I0130 17:08:22.874242 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfrjj\" (UniqueName: \"kubernetes.io/projected/f6f44679-6e5c-49d2-b215-7af315008c79-kube-api-access-dfrjj\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr\" (UID: \"f6f44679-6e5c-49d2-b215-7af315008c79\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" Jan 30 17:08:22 crc kubenswrapper[4875]: I0130 17:08:22.874734 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6f44679-6e5c-49d2-b215-7af315008c79-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr\" (UID: \"f6f44679-6e5c-49d2-b215-7af315008c79\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" Jan 30 17:08:22 crc kubenswrapper[4875]: I0130 17:08:22.874771 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6f44679-6e5c-49d2-b215-7af315008c79-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr\" (UID: \"f6f44679-6e5c-49d2-b215-7af315008c79\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" Jan 30 17:08:22 crc kubenswrapper[4875]: I0130 17:08:22.892508 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfrjj\" (UniqueName: \"kubernetes.io/projected/f6f44679-6e5c-49d2-b215-7af315008c79-kube-api-access-dfrjj\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr\" (UID: \"f6f44679-6e5c-49d2-b215-7af315008c79\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" Jan 30 17:08:23 crc kubenswrapper[4875]: I0130 17:08:23.122874 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" Jan 30 17:08:23 crc kubenswrapper[4875]: I0130 17:08:23.326442 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr"] Jan 30 17:08:23 crc kubenswrapper[4875]: W0130 17:08:23.333333 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6f44679_6e5c_49d2_b215_7af315008c79.slice/crio-12a94651593976957de57f2cf31388798503e15e5a3140a16e4c54912253b2b1 WatchSource:0}: Error finding container 12a94651593976957de57f2cf31388798503e15e5a3140a16e4c54912253b2b1: Status 404 returned error can't find the container with id 12a94651593976957de57f2cf31388798503e15e5a3140a16e4c54912253b2b1 Jan 30 17:08:24 crc kubenswrapper[4875]: I0130 17:08:24.081192 4875 generic.go:334] "Generic (PLEG): container finished" podID="f6f44679-6e5c-49d2-b215-7af315008c79" containerID="671e3b551c8715aa3421d2e1b2ac0e0496bd23031f715ecc0327e668770e1888" exitCode=0 Jan 30 17:08:24 crc kubenswrapper[4875]: I0130 17:08:24.081244 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" event={"ID":"f6f44679-6e5c-49d2-b215-7af315008c79","Type":"ContainerDied","Data":"671e3b551c8715aa3421d2e1b2ac0e0496bd23031f715ecc0327e668770e1888"} Jan 30 17:08:24 crc kubenswrapper[4875]: I0130 17:08:24.081472 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" event={"ID":"f6f44679-6e5c-49d2-b215-7af315008c79","Type":"ContainerStarted","Data":"12a94651593976957de57f2cf31388798503e15e5a3140a16e4c54912253b2b1"} Jan 30 17:08:24 crc kubenswrapper[4875]: I0130 17:08:24.539918 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:24 crc kubenswrapper[4875]: I0130 17:08:24.583774 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:26 crc kubenswrapper[4875]: I0130 17:08:26.287084 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:08:26 crc kubenswrapper[4875]: I0130 17:08:26.287453 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:08:27 crc kubenswrapper[4875]: I0130 17:08:27.098815 4875 generic.go:334] "Generic (PLEG): container finished" podID="f6f44679-6e5c-49d2-b215-7af315008c79" containerID="9784fd8094341b3316e90e152520e3fa719fdbcf307a9edaf8209c126fac5ccf" exitCode=0 Jan 30 17:08:27 crc kubenswrapper[4875]: I0130 17:08:27.098858 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" event={"ID":"f6f44679-6e5c-49d2-b215-7af315008c79","Type":"ContainerDied","Data":"9784fd8094341b3316e90e152520e3fa719fdbcf307a9edaf8209c126fac5ccf"} 
Jan 30 17:08:28 crc kubenswrapper[4875]: I0130 17:08:28.105379 4875 generic.go:334] "Generic (PLEG): container finished" podID="f6f44679-6e5c-49d2-b215-7af315008c79" containerID="92e9f2f583ad9912c0118cc8af7ff8a02b2d8a2cd010cba80431725b1a0102d6" exitCode=0 Jan 30 17:08:28 crc kubenswrapper[4875]: I0130 17:08:28.105442 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" event={"ID":"f6f44679-6e5c-49d2-b215-7af315008c79","Type":"ContainerDied","Data":"92e9f2f583ad9912c0118cc8af7ff8a02b2d8a2cd010cba80431725b1a0102d6"} Jan 30 17:08:29 crc kubenswrapper[4875]: I0130 17:08:29.378130 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" Jan 30 17:08:29 crc kubenswrapper[4875]: I0130 17:08:29.552717 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfrjj\" (UniqueName: \"kubernetes.io/projected/f6f44679-6e5c-49d2-b215-7af315008c79-kube-api-access-dfrjj\") pod \"f6f44679-6e5c-49d2-b215-7af315008c79\" (UID: \"f6f44679-6e5c-49d2-b215-7af315008c79\") " Jan 30 17:08:29 crc kubenswrapper[4875]: I0130 17:08:29.552769 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6f44679-6e5c-49d2-b215-7af315008c79-bundle\") pod \"f6f44679-6e5c-49d2-b215-7af315008c79\" (UID: \"f6f44679-6e5c-49d2-b215-7af315008c79\") " Jan 30 17:08:29 crc kubenswrapper[4875]: I0130 17:08:29.552843 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6f44679-6e5c-49d2-b215-7af315008c79-util\") pod \"f6f44679-6e5c-49d2-b215-7af315008c79\" (UID: \"f6f44679-6e5c-49d2-b215-7af315008c79\") " Jan 30 17:08:29 crc kubenswrapper[4875]: I0130 17:08:29.553722 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6f44679-6e5c-49d2-b215-7af315008c79-bundle" (OuterVolumeSpecName: "bundle") pod "f6f44679-6e5c-49d2-b215-7af315008c79" (UID: "f6f44679-6e5c-49d2-b215-7af315008c79"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:08:29 crc kubenswrapper[4875]: I0130 17:08:29.558577 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6f44679-6e5c-49d2-b215-7af315008c79-kube-api-access-dfrjj" (OuterVolumeSpecName: "kube-api-access-dfrjj") pod "f6f44679-6e5c-49d2-b215-7af315008c79" (UID: "f6f44679-6e5c-49d2-b215-7af315008c79"). InnerVolumeSpecName "kube-api-access-dfrjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:08:29 crc kubenswrapper[4875]: I0130 17:08:29.565524 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6f44679-6e5c-49d2-b215-7af315008c79-util" (OuterVolumeSpecName: "util") pod "f6f44679-6e5c-49d2-b215-7af315008c79" (UID: "f6f44679-6e5c-49d2-b215-7af315008c79"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:08:29 crc kubenswrapper[4875]: I0130 17:08:29.654240 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfrjj\" (UniqueName: \"kubernetes.io/projected/f6f44679-6e5c-49d2-b215-7af315008c79-kube-api-access-dfrjj\") on node \"crc\" DevicePath \"\"" Jan 30 17:08:29 crc kubenswrapper[4875]: I0130 17:08:29.654318 4875 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f6f44679-6e5c-49d2-b215-7af315008c79-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:08:29 crc kubenswrapper[4875]: I0130 17:08:29.654327 4875 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f6f44679-6e5c-49d2-b215-7af315008c79-util\") on node \"crc\" DevicePath \"\"" Jan 30 17:08:30 crc kubenswrapper[4875]: I0130 17:08:30.117461 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" event={"ID":"f6f44679-6e5c-49d2-b215-7af315008c79","Type":"ContainerDied","Data":"12a94651593976957de57f2cf31388798503e15e5a3140a16e4c54912253b2b1"} Jan 30 17:08:30 crc kubenswrapper[4875]: I0130 17:08:30.117511 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12a94651593976957de57f2cf31388798503e15e5a3140a16e4c54912253b2b1" Jan 30 17:08:30 crc kubenswrapper[4875]: I0130 17:08:30.117534 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr" Jan 30 17:08:30 crc kubenswrapper[4875]: I0130 17:08:30.163483 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-l2qcq" Jan 30 17:08:32 crc kubenswrapper[4875]: I0130 17:08:32.970203 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-96tft"] Jan 30 17:08:32 crc kubenswrapper[4875]: E0130 17:08:32.971560 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6f44679-6e5c-49d2-b215-7af315008c79" containerName="extract" Jan 30 17:08:32 crc kubenswrapper[4875]: I0130 17:08:32.971667 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6f44679-6e5c-49d2-b215-7af315008c79" containerName="extract" Jan 30 17:08:32 crc kubenswrapper[4875]: E0130 17:08:32.971734 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6f44679-6e5c-49d2-b215-7af315008c79" containerName="util" Jan 30 17:08:32 crc kubenswrapper[4875]: I0130 17:08:32.971784 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6f44679-6e5c-49d2-b215-7af315008c79" containerName="util" Jan 30 17:08:32 crc kubenswrapper[4875]: E0130 17:08:32.971843 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6f44679-6e5c-49d2-b215-7af315008c79" containerName="pull" Jan 30 17:08:32 crc kubenswrapper[4875]: I0130 17:08:32.971895 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6f44679-6e5c-49d2-b215-7af315008c79" containerName="pull" Jan 30 17:08:32 crc kubenswrapper[4875]: I0130 17:08:32.972048 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6f44679-6e5c-49d2-b215-7af315008c79" containerName="extract" Jan 30 17:08:32 crc kubenswrapper[4875]: I0130 17:08:32.972508 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-96tft" Jan 30 17:08:32 crc kubenswrapper[4875]: I0130 17:08:32.974616 4875 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-8mzgl" Jan 30 17:08:32 crc kubenswrapper[4875]: I0130 17:08:32.975857 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 30 17:08:32 crc kubenswrapper[4875]: I0130 17:08:32.976093 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 30 17:08:32 crc kubenswrapper[4875]: I0130 17:08:32.986234 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-96tft"] Jan 30 17:08:33 crc kubenswrapper[4875]: I0130 17:08:33.014552 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a385b266-57a7-4764-ba1d-79bbf44ed36a-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-96tft\" (UID: \"a385b266-57a7-4764-ba1d-79bbf44ed36a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-96tft" Jan 30 17:08:33 crc kubenswrapper[4875]: I0130 17:08:33.014658 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99zkl\" (UniqueName: \"kubernetes.io/projected/a385b266-57a7-4764-ba1d-79bbf44ed36a-kube-api-access-99zkl\") pod \"cert-manager-operator-controller-manager-66c8bdd694-96tft\" (UID: \"a385b266-57a7-4764-ba1d-79bbf44ed36a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-96tft" Jan 30 17:08:33 crc kubenswrapper[4875]: I0130 17:08:33.115644 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99zkl\" (UniqueName: \"kubernetes.io/projected/a385b266-57a7-4764-ba1d-79bbf44ed36a-kube-api-access-99zkl\") pod \"cert-manager-operator-controller-manager-66c8bdd694-96tft\" (UID: \"a385b266-57a7-4764-ba1d-79bbf44ed36a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-96tft" Jan 30 17:08:33 crc kubenswrapper[4875]: I0130 17:08:33.115736 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a385b266-57a7-4764-ba1d-79bbf44ed36a-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-96tft\" (UID: \"a385b266-57a7-4764-ba1d-79bbf44ed36a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-96tft" Jan 30 17:08:33 crc kubenswrapper[4875]: I0130 17:08:33.116264 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a385b266-57a7-4764-ba1d-79bbf44ed36a-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-96tft\" (UID: \"a385b266-57a7-4764-ba1d-79bbf44ed36a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-96tft" Jan 30 17:08:33 crc kubenswrapper[4875]: I0130 17:08:33.145180 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99zkl\" (UniqueName: \"kubernetes.io/projected/a385b266-57a7-4764-ba1d-79bbf44ed36a-kube-api-access-99zkl\") pod \"cert-manager-operator-controller-manager-66c8bdd694-96tft\" (UID: \"a385b266-57a7-4764-ba1d-79bbf44ed36a\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-96tft" Jan 30 17:08:33 crc kubenswrapper[4875]: I0130 17:08:33.290520 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-96tft" Jan 30 17:08:33 crc kubenswrapper[4875]: I0130 17:08:33.695255 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-96tft"] Jan 30 17:08:34 crc kubenswrapper[4875]: I0130 17:08:34.141744 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-96tft" event={"ID":"a385b266-57a7-4764-ba1d-79bbf44ed36a","Type":"ContainerStarted","Data":"28944a1070426b8c33891c7080d4bfb2227365d8acdf83bc00a8a2a46774b215"} Jan 30 17:08:38 crc kubenswrapper[4875]: I0130 17:08:38.156549 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-96tft" event={"ID":"a385b266-57a7-4764-ba1d-79bbf44ed36a","Type":"ContainerStarted","Data":"b58f5f6e18d49398c76b8b5c2f7f8606032d948742d4caa13e5aae7b08c65110"} Jan 30 17:08:38 crc kubenswrapper[4875]: I0130 17:08:38.194077 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-96tft" podStartSLOduration=2.787016861 podStartE2EDuration="6.194055347s" podCreationTimestamp="2026-01-30 17:08:32 +0000 UTC" firstStartedPulling="2026-01-30 17:08:33.701372553 +0000 UTC m=+724.248735936" lastFinishedPulling="2026-01-30 17:08:37.108411039 +0000 UTC m=+727.655774422" observedRunningTime="2026-01-30 17:08:38.187597058 +0000 UTC m=+728.734960431" watchObservedRunningTime="2026-01-30 17:08:38.194055347 +0000 UTC m=+728.741418750" Jan 30 17:08:38 crc kubenswrapper[4875]: E0130 17:08:38.376476 4875 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6f44679_6e5c_49d2_b215_7af315008c79.slice\": RecentStats: unable to find data in memory cache]" Jan 30 17:08:39 crc kubenswrapper[4875]: I0130 17:08:39.553674 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-qznj9" Jan 30 17:08:40 crc kubenswrapper[4875]: I0130 17:08:40.749900 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-swrxh"] Jan 30 17:08:40 crc kubenswrapper[4875]: I0130 17:08:40.750730 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-swrxh" Jan 30 17:08:40 crc kubenswrapper[4875]: I0130 17:08:40.752683 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 30 17:08:40 crc kubenswrapper[4875]: I0130 17:08:40.752909 4875 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-7zgxw" Jan 30 17:08:40 crc kubenswrapper[4875]: I0130 17:08:40.753086 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 30 17:08:40 crc kubenswrapper[4875]: I0130 17:08:40.762316 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-swrxh"] Jan 30 17:08:40 crc kubenswrapper[4875]: I0130 17:08:40.913389 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfmcd\" (UniqueName: \"kubernetes.io/projected/cff87141-71be-4df3-b630-9724d884f3ca-kube-api-access-tfmcd\") pod \"cert-manager-webhook-6888856db4-swrxh\" (UID: \"cff87141-71be-4df3-b630-9724d884f3ca\") " pod="cert-manager/cert-manager-webhook-6888856db4-swrxh" Jan 30 17:08:40 crc kubenswrapper[4875]: I0130 17:08:40.913523 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cff87141-71be-4df3-b630-9724d884f3ca-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-swrxh\" (UID: \"cff87141-71be-4df3-b630-9724d884f3ca\") " pod="cert-manager/cert-manager-webhook-6888856db4-swrxh" Jan 30 17:08:41 crc kubenswrapper[4875]: I0130 17:08:41.014566 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfmcd\" (UniqueName: \"kubernetes.io/projected/cff87141-71be-4df3-b630-9724d884f3ca-kube-api-access-tfmcd\") pod \"cert-manager-webhook-6888856db4-swrxh\" (UID: \"cff87141-71be-4df3-b630-9724d884f3ca\") " pod="cert-manager/cert-manager-webhook-6888856db4-swrxh" Jan 30 17:08:41 crc kubenswrapper[4875]: I0130 17:08:41.014673 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cff87141-71be-4df3-b630-9724d884f3ca-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-swrxh\" (UID: \"cff87141-71be-4df3-b630-9724d884f3ca\") " pod="cert-manager/cert-manager-webhook-6888856db4-swrxh" Jan 30 17:08:41 crc kubenswrapper[4875]: I0130 17:08:41.043796 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cff87141-71be-4df3-b630-9724d884f3ca-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-swrxh\" (UID: \"cff87141-71be-4df3-b630-9724d884f3ca\") " pod="cert-manager/cert-manager-webhook-6888856db4-swrxh" Jan 30 17:08:41 crc kubenswrapper[4875]: I0130 17:08:41.052474 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfmcd\" (UniqueName: \"kubernetes.io/projected/cff87141-71be-4df3-b630-9724d884f3ca-kube-api-access-tfmcd\") pod \"cert-manager-webhook-6888856db4-swrxh\" (UID: \"cff87141-71be-4df3-b630-9724d884f3ca\") " pod="cert-manager/cert-manager-webhook-6888856db4-swrxh" Jan 30 17:08:41 crc kubenswrapper[4875]: I0130 17:08:41.071911 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-swrxh" Jan 30 17:08:41 crc kubenswrapper[4875]: I0130 17:08:41.503199 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-swrxh"] Jan 30 17:08:42 crc kubenswrapper[4875]: I0130 17:08:42.192392 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-swrxh" event={"ID":"cff87141-71be-4df3-b630-9724d884f3ca","Type":"ContainerStarted","Data":"568c563adb7235217cdc549407e0251bdf01ccaf33d8c4a4a7a5ee4f99f6e7b4"} Jan 30 17:08:43 crc kubenswrapper[4875]: I0130 17:08:43.891502 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-vtttm"] Jan 30 17:08:43 crc kubenswrapper[4875]: I0130 17:08:43.892305 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-vtttm" Jan 30 17:08:43 crc kubenswrapper[4875]: I0130 17:08:43.895532 4875 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-nhrgm" Jan 30 17:08:43 crc kubenswrapper[4875]: I0130 17:08:43.904776 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-vtttm"] Jan 30 17:08:43 crc kubenswrapper[4875]: I0130 17:08:43.905384 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20f472cd-b250-40c1-bef3-3e32a16443a4-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-vtttm\" (UID: \"20f472cd-b250-40c1-bef3-3e32a16443a4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vtttm" Jan 30 17:08:43 crc kubenswrapper[4875]: I0130 17:08:43.905434 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96r9m\" (UniqueName: \"kubernetes.io/projected/20f472cd-b250-40c1-bef3-3e32a16443a4-kube-api-access-96r9m\") pod \"cert-manager-cainjector-5545bd876-vtttm\" (UID: \"20f472cd-b250-40c1-bef3-3e32a16443a4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vtttm" Jan 30 17:08:44 crc kubenswrapper[4875]: I0130 17:08:44.006188 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20f472cd-b250-40c1-bef3-3e32a16443a4-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-vtttm\" (UID: \"20f472cd-b250-40c1-bef3-3e32a16443a4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vtttm" Jan 30 17:08:44 crc kubenswrapper[4875]: I0130 17:08:44.006265 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96r9m\" (UniqueName: \"kubernetes.io/projected/20f472cd-b250-40c1-bef3-3e32a16443a4-kube-api-access-96r9m\") pod \"cert-manager-cainjector-5545bd876-vtttm\" (UID: \"20f472cd-b250-40c1-bef3-3e32a16443a4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vtttm" Jan 30 17:08:44 crc kubenswrapper[4875]: I0130 17:08:44.027665 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20f472cd-b250-40c1-bef3-3e32a16443a4-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-vtttm\" (UID: \"20f472cd-b250-40c1-bef3-3e32a16443a4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vtttm" Jan 30 17:08:44 crc kubenswrapper[4875]: I0130 17:08:44.027818 4875 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-96r9m\" (UniqueName: \"kubernetes.io/projected/20f472cd-b250-40c1-bef3-3e32a16443a4-kube-api-access-96r9m\") pod \"cert-manager-cainjector-5545bd876-vtttm\" (UID: \"20f472cd-b250-40c1-bef3-3e32a16443a4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vtttm" Jan 30 17:08:44 crc kubenswrapper[4875]: I0130 17:08:44.215194 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-vtttm" Jan 30 17:08:45 crc kubenswrapper[4875]: I0130 17:08:45.877296 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-vtttm"] Jan 30 17:08:45 crc kubenswrapper[4875]: W0130 17:08:45.884866 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20f472cd_b250_40c1_bef3_3e32a16443a4.slice/crio-31e4fa637fbf56e67ab641dbe04282a557fde82e4b601d81921596781516dbfc WatchSource:0}: Error finding container 31e4fa637fbf56e67ab641dbe04282a557fde82e4b601d81921596781516dbfc: Status 404 returned error can't find the container with id 31e4fa637fbf56e67ab641dbe04282a557fde82e4b601d81921596781516dbfc Jan 30 17:08:46 crc kubenswrapper[4875]: I0130 17:08:46.219036 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-swrxh" event={"ID":"cff87141-71be-4df3-b630-9724d884f3ca","Type":"ContainerStarted","Data":"3705127ba5bb79955d2a80777af72c653ad96faca829fa39ea9fbdd6fdaaced5"} Jan 30 17:08:46 crc kubenswrapper[4875]: I0130 17:08:46.219187 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-swrxh" Jan 30 17:08:46 crc kubenswrapper[4875]: I0130 17:08:46.221021 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-vtttm" event={"ID":"20f472cd-b250-40c1-bef3-3e32a16443a4","Type":"ContainerStarted","Data":"068372517f5c87f2959c73aa2a628f0bc18f70378a14c9fbee93f89d1e26beee"} Jan 30 17:08:46 crc kubenswrapper[4875]: I0130 17:08:46.221194 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-vtttm" event={"ID":"20f472cd-b250-40c1-bef3-3e32a16443a4","Type":"ContainerStarted","Data":"31e4fa637fbf56e67ab641dbe04282a557fde82e4b601d81921596781516dbfc"} Jan 30 17:08:46 crc kubenswrapper[4875]: I0130 17:08:46.239273 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-swrxh" podStartSLOduration=2.270708545 podStartE2EDuration="6.239255509s" podCreationTimestamp="2026-01-30 17:08:40 +0000 UTC" firstStartedPulling="2026-01-30 17:08:41.518676623 +0000 UTC m=+732.066040026" lastFinishedPulling="2026-01-30 17:08:45.487223607 +0000 UTC m=+736.034586990" observedRunningTime="2026-01-30 17:08:46.233918238 +0000 UTC m=+736.781281621" watchObservedRunningTime="2026-01-30 17:08:46.239255509 +0000 UTC m=+736.786618892" Jan 30 17:08:48 crc kubenswrapper[4875]: E0130 17:08:48.505512 4875 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6f44679_6e5c_49d2_b215_7af315008c79.slice\": RecentStats: unable to find data in memory cache]" Jan 30 17:08:51 crc kubenswrapper[4875]: I0130 17:08:51.074968 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="cert-manager/cert-manager-webhook-6888856db4-swrxh" Jan 30 17:08:51 crc kubenswrapper[4875]: I0130 17:08:51.101286 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-vtttm" podStartSLOduration=8.101261313 podStartE2EDuration="8.101261313s" podCreationTimestamp="2026-01-30 17:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:08:46.256906459 +0000 UTC m=+736.804269842" watchObservedRunningTime="2026-01-30 17:08:51.101261313 +0000 UTC m=+741.648624716" Jan 30 17:08:56 crc kubenswrapper[4875]: I0130 17:08:56.287234 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:08:56 crc kubenswrapper[4875]: I0130 17:08:56.287557 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:08:58 crc kubenswrapper[4875]: E0130 17:08:58.664531 4875 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6f44679_6e5c_49d2_b215_7af315008c79.slice\": RecentStats: unable to find data in memory cache]" Jan 30 17:08:59 crc kubenswrapper[4875]: I0130 17:08:59.664514 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-4nqc6"] Jan 30 17:08:59 crc kubenswrapper[4875]: I0130 17:08:59.665238 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-4nqc6" Jan 30 17:08:59 crc kubenswrapper[4875]: I0130 17:08:59.669035 4875 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-n2d4d" Jan 30 17:08:59 crc kubenswrapper[4875]: I0130 17:08:59.682819 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-4nqc6"] Jan 30 17:08:59 crc kubenswrapper[4875]: I0130 17:08:59.802800 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b355c16e-74db-4e9c-b779-6a921fff40fb-bound-sa-token\") pod \"cert-manager-545d4d4674-4nqc6\" (UID: \"b355c16e-74db-4e9c-b779-6a921fff40fb\") " pod="cert-manager/cert-manager-545d4d4674-4nqc6" Jan 30 17:08:59 crc kubenswrapper[4875]: I0130 17:08:59.802894 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn45p\" (UniqueName: \"kubernetes.io/projected/b355c16e-74db-4e9c-b779-6a921fff40fb-kube-api-access-dn45p\") pod \"cert-manager-545d4d4674-4nqc6\" (UID: \"b355c16e-74db-4e9c-b779-6a921fff40fb\") " pod="cert-manager/cert-manager-545d4d4674-4nqc6" Jan 30 17:08:59 crc kubenswrapper[4875]: I0130 17:08:59.904467 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b355c16e-74db-4e9c-b779-6a921fff40fb-bound-sa-token\") pod \"cert-manager-545d4d4674-4nqc6\" (UID: \"b355c16e-74db-4e9c-b779-6a921fff40fb\") " pod="cert-manager/cert-manager-545d4d4674-4nqc6" Jan 30 17:08:59 crc kubenswrapper[4875]: I0130 17:08:59.904619 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn45p\" (UniqueName: \"kubernetes.io/projected/b355c16e-74db-4e9c-b779-6a921fff40fb-kube-api-access-dn45p\") pod \"cert-manager-545d4d4674-4nqc6\" (UID: \"b355c16e-74db-4e9c-b779-6a921fff40fb\") " pod="cert-manager/cert-manager-545d4d4674-4nqc6" Jan 30 17:08:59 crc kubenswrapper[4875]: I0130 17:08:59.925736 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b355c16e-74db-4e9c-b779-6a921fff40fb-bound-sa-token\") pod \"cert-manager-545d4d4674-4nqc6\" (UID: \"b355c16e-74db-4e9c-b779-6a921fff40fb\") " pod="cert-manager/cert-manager-545d4d4674-4nqc6" Jan 30 17:08:59 crc kubenswrapper[4875]: I0130 17:08:59.927255 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn45p\" (UniqueName: \"kubernetes.io/projected/b355c16e-74db-4e9c-b779-6a921fff40fb-kube-api-access-dn45p\") pod \"cert-manager-545d4d4674-4nqc6\" (UID: \"b355c16e-74db-4e9c-b779-6a921fff40fb\") " pod="cert-manager/cert-manager-545d4d4674-4nqc6" Jan 30 17:08:59 crc kubenswrapper[4875]: I0130 17:08:59.988949 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-4nqc6"
Jan 30 17:09:00 crc kubenswrapper[4875]: I0130 17:09:00.406172 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-4nqc6"]
Jan 30 17:09:01 crc kubenswrapper[4875]: I0130 17:09:01.323825 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-4nqc6" event={"ID":"b355c16e-74db-4e9c-b779-6a921fff40fb","Type":"ContainerStarted","Data":"e883ad77bd6e634876a6055f481d825ddfb9c3dd5efb16d6c77b14f18e41f54b"}
Jan 30 17:09:01 crc kubenswrapper[4875]: I0130 17:09:01.324471 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-4nqc6" event={"ID":"b355c16e-74db-4e9c-b779-6a921fff40fb","Type":"ContainerStarted","Data":"d098116c06c33ce6dbf21b3410e8208943a8ce9429fa70cec7f1890debf513f6"}
Jan 30 17:09:01 crc kubenswrapper[4875]: I0130 17:09:01.345344 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-4nqc6" podStartSLOduration=2.34532165 podStartE2EDuration="2.34532165s" podCreationTimestamp="2026-01-30 17:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:09:01.341359825 +0000 UTC m=+751.888723248" watchObservedRunningTime="2026-01-30 17:09:01.34532165 +0000 UTC m=+751.892685043"
Jan 30 17:09:06 crc kubenswrapper[4875]: I0130 17:09:06.491364 4875 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 30 17:09:08 crc kubenswrapper[4875]: E0130 17:09:08.822357 4875 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6f44679_6e5c_49d2_b215_7af315008c79.slice\": RecentStats: unable to find data in memory cache]"
Jan 30 17:09:10 crc kubenswrapper[4875]: I0130 17:09:10.365379 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-mrc6v"]
Jan 30 17:09:10 crc kubenswrapper[4875]: I0130 17:09:10.367135 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mrc6v"
Jan 30 17:09:10 crc kubenswrapper[4875]: I0130 17:09:10.374027 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 30 17:09:10 crc kubenswrapper[4875]: I0130 17:09:10.374309 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-m7qlp"
Jan 30 17:09:10 crc kubenswrapper[4875]: I0130 17:09:10.392103 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 30 17:09:10 crc kubenswrapper[4875]: I0130 17:09:10.401895 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mrc6v"]
Jan 30 17:09:10 crc kubenswrapper[4875]: I0130 17:09:10.567571 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcjqj\" (UniqueName: \"kubernetes.io/projected/a3f17cd3-67f0-48f8-b5bb-08d1b027188e-kube-api-access-kcjqj\") pod \"openstack-operator-index-mrc6v\" (UID: \"a3f17cd3-67f0-48f8-b5bb-08d1b027188e\") " pod="openstack-operators/openstack-operator-index-mrc6v"
Jan 30 17:09:10 crc kubenswrapper[4875]: I0130 17:09:10.668984 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcjqj\" (UniqueName: \"kubernetes.io/projected/a3f17cd3-67f0-48f8-b5bb-08d1b027188e-kube-api-access-kcjqj\") pod \"openstack-operator-index-mrc6v\" (UID: \"a3f17cd3-67f0-48f8-b5bb-08d1b027188e\") " pod="openstack-operators/openstack-operator-index-mrc6v"
Jan 30 17:09:10 crc kubenswrapper[4875]: I0130 17:09:10.688979 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcjqj\" (UniqueName: \"kubernetes.io/projected/a3f17cd3-67f0-48f8-b5bb-08d1b027188e-kube-api-access-kcjqj\") pod \"openstack-operator-index-mrc6v\" (UID: \"a3f17cd3-67f0-48f8-b5bb-08d1b027188e\") " pod="openstack-operators/openstack-operator-index-mrc6v"
Jan 30 17:09:10 crc kubenswrapper[4875]: I0130 17:09:10.713950 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mrc6v"
Jan 30 17:09:11 crc kubenswrapper[4875]: I0130 17:09:11.130004 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mrc6v"]
Jan 30 17:09:11 crc kubenswrapper[4875]: I0130 17:09:11.397466 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mrc6v" event={"ID":"a3f17cd3-67f0-48f8-b5bb-08d1b027188e","Type":"ContainerStarted","Data":"c80b13175081ab3bd6f6f5cc93e89d6362eeead832bcaf031f289621885a767f"}
Jan 30 17:09:13 crc kubenswrapper[4875]: I0130 17:09:13.411492 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mrc6v" event={"ID":"a3f17cd3-67f0-48f8-b5bb-08d1b027188e","Type":"ContainerStarted","Data":"32f8390b42da1d0297a7a16bd27f65ccee13382c2e868880dff09be1986879dd"}
Jan 30 17:09:13 crc kubenswrapper[4875]: I0130 17:09:13.431630 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-mrc6v" podStartSLOduration=1.545624397 podStartE2EDuration="3.431606011s" podCreationTimestamp="2026-01-30 17:09:10 +0000 UTC" firstStartedPulling="2026-01-30 17:09:11.133731283 +0000 UTC m=+761.681094666" lastFinishedPulling="2026-01-30 17:09:13.019712897 +0000 UTC m=+763.567076280" observedRunningTime="2026-01-30 17:09:13.427669998 +0000 UTC m=+763.975033381" watchObservedRunningTime="2026-01-30 17:09:13.431606011 +0000 UTC m=+763.978969424"
Jan 30 17:09:13 crc kubenswrapper[4875]: I0130 17:09:13.724295 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mrc6v"]
Jan 30 17:09:14 crc kubenswrapper[4875]: I0130 17:09:14.330978 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-wdth9"]
Jan 30 17:09:14 crc kubenswrapper[4875]: I0130 17:09:14.332535 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-wdth9"
Jan 30 17:09:14 crc kubenswrapper[4875]: I0130 17:09:14.345178 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wdth9"]
Jan 30 17:09:14 crc kubenswrapper[4875]: I0130 17:09:14.421289 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzrf7\" (UniqueName: \"kubernetes.io/projected/90d2ca44-318f-4c47-8a9e-2781ac1151e6-kube-api-access-gzrf7\") pod \"openstack-operator-index-wdth9\" (UID: \"90d2ca44-318f-4c47-8a9e-2781ac1151e6\") " pod="openstack-operators/openstack-operator-index-wdth9"
Jan 30 17:09:14 crc kubenswrapper[4875]: I0130 17:09:14.522770 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzrf7\" (UniqueName: \"kubernetes.io/projected/90d2ca44-318f-4c47-8a9e-2781ac1151e6-kube-api-access-gzrf7\") pod \"openstack-operator-index-wdth9\" (UID: \"90d2ca44-318f-4c47-8a9e-2781ac1151e6\") " pod="openstack-operators/openstack-operator-index-wdth9"
Jan 30 17:09:14 crc kubenswrapper[4875]: I0130 17:09:14.549863 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzrf7\" (UniqueName: \"kubernetes.io/projected/90d2ca44-318f-4c47-8a9e-2781ac1151e6-kube-api-access-gzrf7\") pod \"openstack-operator-index-wdth9\" (UID: \"90d2ca44-318f-4c47-8a9e-2781ac1151e6\") " pod="openstack-operators/openstack-operator-index-wdth9"
Jan 30 17:09:14 crc kubenswrapper[4875]: I0130 17:09:14.650838 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-wdth9"
Jan 30 17:09:14 crc kubenswrapper[4875]: I0130 17:09:14.854870 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wdth9"]
Jan 30 17:09:14 crc kubenswrapper[4875]: W0130 17:09:14.864470 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90d2ca44_318f_4c47_8a9e_2781ac1151e6.slice/crio-ce5ef107ef476c03ab7461729c11191a30e9fb4d07e10926d315973589f1a518 WatchSource:0}: Error finding container ce5ef107ef476c03ab7461729c11191a30e9fb4d07e10926d315973589f1a518: Status 404 returned error can't find the container with id ce5ef107ef476c03ab7461729c11191a30e9fb4d07e10926d315973589f1a518
Jan 30 17:09:15 crc kubenswrapper[4875]: I0130 17:09:15.427169 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wdth9" event={"ID":"90d2ca44-318f-4c47-8a9e-2781ac1151e6","Type":"ContainerStarted","Data":"425bfdd1f856c2a030abf4738ca9d9cb2e3709fcd8be496b6394192d9e48cfc6"}
Jan 30 17:09:15 crc kubenswrapper[4875]: I0130 17:09:15.427241 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wdth9" event={"ID":"90d2ca44-318f-4c47-8a9e-2781ac1151e6","Type":"ContainerStarted","Data":"ce5ef107ef476c03ab7461729c11191a30e9fb4d07e10926d315973589f1a518"}
Jan 30 17:09:15 crc kubenswrapper[4875]: I0130 17:09:15.427228 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-mrc6v" podUID="a3f17cd3-67f0-48f8-b5bb-08d1b027188e" containerName="registry-server" containerID="cri-o://32f8390b42da1d0297a7a16bd27f65ccee13382c2e868880dff09be1986879dd" gracePeriod=2
Jan 30 17:09:15 crc kubenswrapper[4875]: I0130 17:09:15.457451 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-wdth9" podStartSLOduration=1.406422264 podStartE2EDuration="1.457428926s" podCreationTimestamp="2026-01-30 17:09:14 +0000 UTC" firstStartedPulling="2026-01-30 17:09:14.867433149 +0000 UTC m=+765.414796522" lastFinishedPulling="2026-01-30 17:09:14.918439801 +0000 UTC m=+765.465803184" observedRunningTime="2026-01-30 17:09:15.454996713 +0000 UTC m=+766.002360176" watchObservedRunningTime="2026-01-30 17:09:15.457428926 +0000 UTC m=+766.004792309"
Jan 30 17:09:15 crc kubenswrapper[4875]: I0130 17:09:15.784495 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mrc6v"
Jan 30 17:09:15 crc kubenswrapper[4875]: I0130 17:09:15.859011 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcjqj\" (UniqueName: \"kubernetes.io/projected/a3f17cd3-67f0-48f8-b5bb-08d1b027188e-kube-api-access-kcjqj\") pod \"a3f17cd3-67f0-48f8-b5bb-08d1b027188e\" (UID: \"a3f17cd3-67f0-48f8-b5bb-08d1b027188e\") "
Jan 30 17:09:15 crc kubenswrapper[4875]: I0130 17:09:15.864286 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3f17cd3-67f0-48f8-b5bb-08d1b027188e-kube-api-access-kcjqj" (OuterVolumeSpecName: "kube-api-access-kcjqj") pod "a3f17cd3-67f0-48f8-b5bb-08d1b027188e" (UID: "a3f17cd3-67f0-48f8-b5bb-08d1b027188e"). InnerVolumeSpecName "kube-api-access-kcjqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:09:15 crc kubenswrapper[4875]: I0130 17:09:15.960789 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcjqj\" (UniqueName: \"kubernetes.io/projected/a3f17cd3-67f0-48f8-b5bb-08d1b027188e-kube-api-access-kcjqj\") on node \"crc\" DevicePath \"\""
Jan 30 17:09:16 crc kubenswrapper[4875]: I0130 17:09:16.446067 4875 generic.go:334] "Generic (PLEG): container finished" podID="a3f17cd3-67f0-48f8-b5bb-08d1b027188e" containerID="32f8390b42da1d0297a7a16bd27f65ccee13382c2e868880dff09be1986879dd" exitCode=0
Jan 30 17:09:16 crc kubenswrapper[4875]: I0130 17:09:16.446109 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mrc6v" event={"ID":"a3f17cd3-67f0-48f8-b5bb-08d1b027188e","Type":"ContainerDied","Data":"32f8390b42da1d0297a7a16bd27f65ccee13382c2e868880dff09be1986879dd"}
Jan 30 17:09:16 crc kubenswrapper[4875]: I0130 17:09:16.446152 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mrc6v"
Jan 30 17:09:16 crc kubenswrapper[4875]: I0130 17:09:16.446177 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mrc6v" event={"ID":"a3f17cd3-67f0-48f8-b5bb-08d1b027188e","Type":"ContainerDied","Data":"c80b13175081ab3bd6f6f5cc93e89d6362eeead832bcaf031f289621885a767f"}
Jan 30 17:09:16 crc kubenswrapper[4875]: I0130 17:09:16.446209 4875 scope.go:117] "RemoveContainer" containerID="32f8390b42da1d0297a7a16bd27f65ccee13382c2e868880dff09be1986879dd"
Jan 30 17:09:16 crc kubenswrapper[4875]: I0130 17:09:16.467252 4875 scope.go:117] "RemoveContainer" containerID="32f8390b42da1d0297a7a16bd27f65ccee13382c2e868880dff09be1986879dd"
Jan 30 17:09:16 crc kubenswrapper[4875]: E0130 17:09:16.467699 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f8390b42da1d0297a7a16bd27f65ccee13382c2e868880dff09be1986879dd\": container with ID starting with 32f8390b42da1d0297a7a16bd27f65ccee13382c2e868880dff09be1986879dd not found: ID does not exist" containerID="32f8390b42da1d0297a7a16bd27f65ccee13382c2e868880dff09be1986879dd"
Jan 30 17:09:16 crc kubenswrapper[4875]: I0130 17:09:16.467730 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32f8390b42da1d0297a7a16bd27f65ccee13382c2e868880dff09be1986879dd"} err="failed to get container status \"32f8390b42da1d0297a7a16bd27f65ccee13382c2e868880dff09be1986879dd\": rpc error: code = NotFound desc = could not find container \"32f8390b42da1d0297a7a16bd27f65ccee13382c2e868880dff09be1986879dd\": container with ID starting with 32f8390b42da1d0297a7a16bd27f65ccee13382c2e868880dff09be1986879dd not found: ID does not exist"
Jan 30 17:09:16 crc kubenswrapper[4875]: I0130 17:09:16.470465 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mrc6v"]
Jan 30 17:09:16 crc kubenswrapper[4875]: I0130 17:09:16.475478 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-mrc6v"]
Jan 30 17:09:18 crc kubenswrapper[4875]: I0130 17:09:18.144307 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3f17cd3-67f0-48f8-b5bb-08d1b027188e" path="/var/lib/kubelet/pods/a3f17cd3-67f0-48f8-b5bb-08d1b027188e/volumes"
Jan 30 17:09:18 crc kubenswrapper[4875]: E0130 17:09:18.961404 4875 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6f44679_6e5c_49d2_b215_7af315008c79.slice\": RecentStats: unable to find data in memory cache]"
Jan 30 17:09:19 crc kubenswrapper[4875]: I0130 17:09:19.933187 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kccg6"]
Jan 30 17:09:19 crc kubenswrapper[4875]: E0130 17:09:19.933738 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3f17cd3-67f0-48f8-b5bb-08d1b027188e" containerName="registry-server"
Jan 30 17:09:19 crc kubenswrapper[4875]: I0130 17:09:19.933753 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3f17cd3-67f0-48f8-b5bb-08d1b027188e" containerName="registry-server"
Jan 30 17:09:19 crc kubenswrapper[4875]: I0130 17:09:19.933866 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3f17cd3-67f0-48f8-b5bb-08d1b027188e" containerName="registry-server"
Jan 30 17:09:19 crc kubenswrapper[4875]: I0130 17:09:19.934714 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kccg6"
Jan 30 17:09:19 crc kubenswrapper[4875]: I0130 17:09:19.946465 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kccg6"]
Jan 30 17:09:20 crc kubenswrapper[4875]: I0130 17:09:20.111291 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4265bc9-c7ff-478b-a97f-0c6227b26173-utilities\") pod \"community-operators-kccg6\" (UID: \"e4265bc9-c7ff-478b-a97f-0c6227b26173\") " pod="openshift-marketplace/community-operators-kccg6"
Jan 30 17:09:20 crc kubenswrapper[4875]: I0130 17:09:20.111536 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4265bc9-c7ff-478b-a97f-0c6227b26173-catalog-content\") pod \"community-operators-kccg6\" (UID: \"e4265bc9-c7ff-478b-a97f-0c6227b26173\") " pod="openshift-marketplace/community-operators-kccg6"
Jan 30 17:09:20 crc kubenswrapper[4875]: I0130 17:09:20.111660 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72t6l\" (UniqueName: \"kubernetes.io/projected/e4265bc9-c7ff-478b-a97f-0c6227b26173-kube-api-access-72t6l\") pod \"community-operators-kccg6\" (UID: \"e4265bc9-c7ff-478b-a97f-0c6227b26173\") " pod="openshift-marketplace/community-operators-kccg6"
Jan 30 17:09:20 crc kubenswrapper[4875]: I0130 17:09:20.212976 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4265bc9-c7ff-478b-a97f-0c6227b26173-utilities\") pod \"community-operators-kccg6\" (UID: \"e4265bc9-c7ff-478b-a97f-0c6227b26173\") " pod="openshift-marketplace/community-operators-kccg6"
Jan 30 17:09:20 crc kubenswrapper[4875]: I0130 17:09:20.213076 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4265bc9-c7ff-478b-a97f-0c6227b26173-catalog-content\") pod \"community-operators-kccg6\" (UID: \"e4265bc9-c7ff-478b-a97f-0c6227b26173\") " pod="openshift-marketplace/community-operators-kccg6"
Jan 30 17:09:20 crc kubenswrapper[4875]: I0130 17:09:20.213098 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72t6l\" (UniqueName: \"kubernetes.io/projected/e4265bc9-c7ff-478b-a97f-0c6227b26173-kube-api-access-72t6l\") pod \"community-operators-kccg6\" (UID: \"e4265bc9-c7ff-478b-a97f-0c6227b26173\") " pod="openshift-marketplace/community-operators-kccg6"
Jan 30 17:09:20 crc kubenswrapper[4875]: I0130 17:09:20.213787 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4265bc9-c7ff-478b-a97f-0c6227b26173-utilities\") pod \"community-operators-kccg6\" (UID: \"e4265bc9-c7ff-478b-a97f-0c6227b26173\") " pod="openshift-marketplace/community-operators-kccg6"
Jan 30 17:09:20 crc kubenswrapper[4875]: I0130 17:09:20.213928 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4265bc9-c7ff-478b-a97f-0c6227b26173-catalog-content\") pod \"community-operators-kccg6\" (UID: \"e4265bc9-c7ff-478b-a97f-0c6227b26173\") " pod="openshift-marketplace/community-operators-kccg6"
Jan 30 17:09:20 crc kubenswrapper[4875]: I0130 17:09:20.232518 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72t6l\" (UniqueName: \"kubernetes.io/projected/e4265bc9-c7ff-478b-a97f-0c6227b26173-kube-api-access-72t6l\") pod \"community-operators-kccg6\" (UID: \"e4265bc9-c7ff-478b-a97f-0c6227b26173\") " pod="openshift-marketplace/community-operators-kccg6"
Jan 30 17:09:20 crc kubenswrapper[4875]: I0130 17:09:20.268759 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kccg6"
Jan 30 17:09:20 crc kubenswrapper[4875]: I0130 17:09:20.688648 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kccg6"]
Jan 30 17:09:21 crc kubenswrapper[4875]: I0130 17:09:21.477166 4875 generic.go:334] "Generic (PLEG): container finished" podID="e4265bc9-c7ff-478b-a97f-0c6227b26173" containerID="d3455922feb76d66f9823a96627429ef61258ccc7bdea561471532fe321ff60c" exitCode=0
Jan 30 17:09:21 crc kubenswrapper[4875]: I0130 17:09:21.477211 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kccg6" event={"ID":"e4265bc9-c7ff-478b-a97f-0c6227b26173","Type":"ContainerDied","Data":"d3455922feb76d66f9823a96627429ef61258ccc7bdea561471532fe321ff60c"}
Jan 30 17:09:21 crc kubenswrapper[4875]: I0130 17:09:21.477453 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kccg6" event={"ID":"e4265bc9-c7ff-478b-a97f-0c6227b26173","Type":"ContainerStarted","Data":"793a031d8c0ab8757078995650ca0c90a9057c5249a2991becbc30bfa428b82c"}
Jan 30 17:09:22 crc kubenswrapper[4875]: I0130 17:09:22.485514 4875 generic.go:334] "Generic (PLEG): container finished" podID="e4265bc9-c7ff-478b-a97f-0c6227b26173" containerID="a81e326b5e0162e1ee15311267a723167a6a7f4e6e1c205e5ced33e9ae363c84" exitCode=0
Jan 30 17:09:22 crc kubenswrapper[4875]: I0130 17:09:22.485634 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kccg6" event={"ID":"e4265bc9-c7ff-478b-a97f-0c6227b26173","Type":"ContainerDied","Data":"a81e326b5e0162e1ee15311267a723167a6a7f4e6e1c205e5ced33e9ae363c84"}
Jan 30 17:09:23 crc kubenswrapper[4875]: I0130 17:09:23.494341 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kccg6" event={"ID":"e4265bc9-c7ff-478b-a97f-0c6227b26173","Type":"ContainerStarted","Data":"cf92cab152a0b7db67ca1ed38dac38383a9db07dfa3646f38e0b0055516f7848"}
Jan 30 17:09:24 crc kubenswrapper[4875]: I0130 17:09:24.651065 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-wdth9"
Jan 30 17:09:24 crc kubenswrapper[4875]: I0130 17:09:24.651450 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-wdth9"
Jan 30 17:09:24 crc kubenswrapper[4875]: I0130 17:09:24.677826 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-wdth9"
Jan 30 17:09:24 crc kubenswrapper[4875]: I0130 17:09:24.693448 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kccg6" podStartSLOduration=4.143172005 podStartE2EDuration="5.69343173s" podCreationTimestamp="2026-01-30 17:09:19 +0000 UTC" firstStartedPulling="2026-01-30 17:09:21.478887625 +0000 UTC m=+772.026251028" lastFinishedPulling="2026-01-30 17:09:23.02914737 +0000 UTC m=+773.576510753" observedRunningTime="2026-01-30 17:09:23.513830889 +0000 UTC m=+774.061194272" watchObservedRunningTime="2026-01-30 17:09:24.69343173 +0000 UTC m=+775.240795113"
Jan 30 17:09:25 crc kubenswrapper[4875]: I0130 17:09:25.532637 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-wdth9"
Jan 30 17:09:26 crc kubenswrapper[4875]: I0130 17:09:26.287888 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:09:26 crc kubenswrapper[4875]: I0130 17:09:26.287992 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:09:26 crc kubenswrapper[4875]: I0130 17:09:26.288068 4875 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn"
Jan 30 17:09:26 crc kubenswrapper[4875]: I0130 17:09:26.289119 4875 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44cbbe2347c99f305a77309b497f459a3e30dcbc1e853b9af4c1697fcc292f86"} pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 17:09:26 crc kubenswrapper[4875]: I0130 17:09:26.289227 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" containerID="cri-o://44cbbe2347c99f305a77309b497f459a3e30dcbc1e853b9af4c1697fcc292f86" gracePeriod=600
Jan 30 17:09:26 crc kubenswrapper[4875]: I0130 17:09:26.514294 4875 generic.go:334] "Generic (PLEG): container finished" podID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerID="44cbbe2347c99f305a77309b497f459a3e30dcbc1e853b9af4c1697fcc292f86" exitCode=0
Jan 30 17:09:26 crc kubenswrapper[4875]: I0130 17:09:26.514404 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerDied","Data":"44cbbe2347c99f305a77309b497f459a3e30dcbc1e853b9af4c1697fcc292f86"}
Jan 30 17:09:26 crc kubenswrapper[4875]: I0130 17:09:26.514741 4875 scope.go:117] "RemoveContainer" containerID="ea4fc173ca1c7737282f76b497b93072de498c51c422171abc059436c0e39c75"
Jan 30 17:09:27 crc kubenswrapper[4875]: I0130 17:09:27.527400 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerStarted","Data":"ed42a4c14dffd4d7e8ff0992005f668baba6e088536dd037290ec2423738d85a"}
Jan 30 17:09:29 crc kubenswrapper[4875]: E0130 17:09:29.099518 4875 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6f44679_6e5c_49d2_b215_7af315008c79.slice\": RecentStats: unable to find data in memory cache]"
Jan 30 17:09:30 crc kubenswrapper[4875]: I0130 17:09:30.269151 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kccg6"
Jan 30 17:09:30 crc kubenswrapper[4875]: I0130 17:09:30.269236 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kccg6"
Jan 30 17:09:30 crc kubenswrapper[4875]: I0130 17:09:30.314299 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kccg6"
Jan 30 17:09:30 crc kubenswrapper[4875]: I0130 17:09:30.593425 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kccg6"
Jan 30 17:09:32 crc kubenswrapper[4875]: I0130 17:09:32.157057 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd"]
Jan 30 17:09:32 crc kubenswrapper[4875]: I0130 17:09:32.158655 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd"
Jan 30 17:09:32 crc kubenswrapper[4875]: I0130 17:09:32.161228 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-g8j98"
Jan 30 17:09:32 crc kubenswrapper[4875]: I0130 17:09:32.169239 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd"]
Jan 30 17:09:32 crc kubenswrapper[4875]: I0130 17:09:32.179726 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5b461b0-718a-4065-bf1d-db2860d2af04-util\") pod \"c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd\" (UID: \"f5b461b0-718a-4065-bf1d-db2860d2af04\") " pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd"
Jan 30 17:09:32 crc kubenswrapper[4875]: I0130 17:09:32.179864 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5b461b0-718a-4065-bf1d-db2860d2af04-bundle\") pod \"c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd\" (UID: \"f5b461b0-718a-4065-bf1d-db2860d2af04\") " pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd"
Jan 30 17:09:32 crc kubenswrapper[4875]: I0130 17:09:32.179922 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njwnj\" (UniqueName: \"kubernetes.io/projected/f5b461b0-718a-4065-bf1d-db2860d2af04-kube-api-access-njwnj\") pod \"c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd\" (UID: \"f5b461b0-718a-4065-bf1d-db2860d2af04\") " pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd"
Jan 30 17:09:32 crc kubenswrapper[4875]: I0130 17:09:32.281208 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5b461b0-718a-4065-bf1d-db2860d2af04-bundle\") pod \"c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd\" (UID: \"f5b461b0-718a-4065-bf1d-db2860d2af04\") " pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd"
Jan 30 17:09:32 crc kubenswrapper[4875]: I0130 17:09:32.281334 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njwnj\" (UniqueName: \"kubernetes.io/projected/f5b461b0-718a-4065-bf1d-db2860d2af04-kube-api-access-njwnj\") pod \"c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd\" (UID: \"f5b461b0-718a-4065-bf1d-db2860d2af04\") " pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd"
Jan 30 17:09:32 crc kubenswrapper[4875]: I0130 17:09:32.281390 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5b461b0-718a-4065-bf1d-db2860d2af04-util\") pod \"c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd\" (UID: \"f5b461b0-718a-4065-bf1d-db2860d2af04\") " pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd"
Jan 30 17:09:32 crc kubenswrapper[4875]: I0130 17:09:32.282063 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5b461b0-718a-4065-bf1d-db2860d2af04-bundle\") pod \"c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd\" (UID: \"f5b461b0-718a-4065-bf1d-db2860d2af04\") " pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd"
Jan 30 17:09:32 crc kubenswrapper[4875]: I0130 17:09:32.282110 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5b461b0-718a-4065-bf1d-db2860d2af04-util\") pod \"c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd\" (UID: \"f5b461b0-718a-4065-bf1d-db2860d2af04\") " pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd"
Jan 30 17:09:32 crc kubenswrapper[4875]: I0130 17:09:32.299520 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njwnj\" (UniqueName: \"kubernetes.io/projected/f5b461b0-718a-4065-bf1d-db2860d2af04-kube-api-access-njwnj\") pod \"c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd\" (UID: \"f5b461b0-718a-4065-bf1d-db2860d2af04\") " pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd"
Jan 30 17:09:32 crc kubenswrapper[4875]: I0130 17:09:32.479285 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd"
Jan 30 17:09:32 crc kubenswrapper[4875]: I0130 17:09:32.856409 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd"]
Jan 30 17:09:32 crc kubenswrapper[4875]: W0130 17:09:32.858495 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5b461b0_718a_4065_bf1d_db2860d2af04.slice/crio-49d17c719e0af8a75be6aa7b12ad883a958929d103441b65c7ef0dc0bd8fbfd1 WatchSource:0}: Error finding container 49d17c719e0af8a75be6aa7b12ad883a958929d103441b65c7ef0dc0bd8fbfd1: Status 404 returned error can't find the container with id 49d17c719e0af8a75be6aa7b12ad883a958929d103441b65c7ef0dc0bd8fbfd1
Jan 30 17:09:33 crc kubenswrapper[4875]: I0130 17:09:33.523760 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kccg6"]
Jan 30 17:09:33 crc kubenswrapper[4875]: I0130 17:09:33.523996 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kccg6" podUID="e4265bc9-c7ff-478b-a97f-0c6227b26173" containerName="registry-server" containerID="cri-o://cf92cab152a0b7db67ca1ed38dac38383a9db07dfa3646f38e0b0055516f7848" gracePeriod=2
Jan 30 17:09:33 crc kubenswrapper[4875]: I0130 17:09:33.563421 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd" event={"ID":"f5b461b0-718a-4065-bf1d-db2860d2af04","Type":"ContainerStarted","Data":"af47ea5e32d98835b979c75bf0b5fb96e7341eae1165e525d27e176367256f96"}
Jan 30 17:09:33 crc kubenswrapper[4875]: I0130 17:09:33.563465 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd" event={"ID":"f5b461b0-718a-4065-bf1d-db2860d2af04","Type":"ContainerStarted","Data":"49d17c719e0af8a75be6aa7b12ad883a958929d103441b65c7ef0dc0bd8fbfd1"}
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.370539 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kccg6"
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.407868 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4265bc9-c7ff-478b-a97f-0c6227b26173-catalog-content\") pod \"e4265bc9-c7ff-478b-a97f-0c6227b26173\" (UID: \"e4265bc9-c7ff-478b-a97f-0c6227b26173\") "
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.407936 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72t6l\" (UniqueName: \"kubernetes.io/projected/e4265bc9-c7ff-478b-a97f-0c6227b26173-kube-api-access-72t6l\") pod \"e4265bc9-c7ff-478b-a97f-0c6227b26173\" (UID: \"e4265bc9-c7ff-478b-a97f-0c6227b26173\") "
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.408017 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4265bc9-c7ff-478b-a97f-0c6227b26173-utilities\") pod \"e4265bc9-c7ff-478b-a97f-0c6227b26173\" (UID: \"e4265bc9-c7ff-478b-a97f-0c6227b26173\") "
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.409678 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4265bc9-c7ff-478b-a97f-0c6227b26173-utilities" (OuterVolumeSpecName: "utilities") pod "e4265bc9-c7ff-478b-a97f-0c6227b26173" (UID: "e4265bc9-c7ff-478b-a97f-0c6227b26173"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.415686 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4265bc9-c7ff-478b-a97f-0c6227b26173-kube-api-access-72t6l" (OuterVolumeSpecName: "kube-api-access-72t6l") pod "e4265bc9-c7ff-478b-a97f-0c6227b26173" (UID: "e4265bc9-c7ff-478b-a97f-0c6227b26173"). InnerVolumeSpecName "kube-api-access-72t6l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.457398 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4265bc9-c7ff-478b-a97f-0c6227b26173-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4265bc9-c7ff-478b-a97f-0c6227b26173" (UID: "e4265bc9-c7ff-478b-a97f-0c6227b26173"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.509831 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4265bc9-c7ff-478b-a97f-0c6227b26173-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.509859 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4265bc9-c7ff-478b-a97f-0c6227b26173-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.509874 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72t6l\" (UniqueName: \"kubernetes.io/projected/e4265bc9-c7ff-478b-a97f-0c6227b26173-kube-api-access-72t6l\") on node \"crc\" DevicePath \"\""
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.573532 4875 generic.go:334] "Generic (PLEG): container finished" podID="f5b461b0-718a-4065-bf1d-db2860d2af04" containerID="af47ea5e32d98835b979c75bf0b5fb96e7341eae1165e525d27e176367256f96" exitCode=0
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.573649 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd" event={"ID":"f5b461b0-718a-4065-bf1d-db2860d2af04","Type":"ContainerDied","Data":"af47ea5e32d98835b979c75bf0b5fb96e7341eae1165e525d27e176367256f96"}
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.578019 4875 generic.go:334] "Generic (PLEG): container finished" podID="e4265bc9-c7ff-478b-a97f-0c6227b26173" containerID="cf92cab152a0b7db67ca1ed38dac38383a9db07dfa3646f38e0b0055516f7848" exitCode=0
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.578097 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kccg6" event={"ID":"e4265bc9-c7ff-478b-a97f-0c6227b26173","Type":"ContainerDied","Data":"cf92cab152a0b7db67ca1ed38dac38383a9db07dfa3646f38e0b0055516f7848"}
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.578126 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kccg6" event={"ID":"e4265bc9-c7ff-478b-a97f-0c6227b26173","Type":"ContainerDied","Data":"793a031d8c0ab8757078995650ca0c90a9057c5249a2991becbc30bfa428b82c"}
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.578143 4875 scope.go:117] "RemoveContainer" containerID="cf92cab152a0b7db67ca1ed38dac38383a9db07dfa3646f38e0b0055516f7848"
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.578282 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kccg6"
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.597764 4875 scope.go:117] "RemoveContainer" containerID="a81e326b5e0162e1ee15311267a723167a6a7f4e6e1c205e5ced33e9ae363c84"
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.613129 4875 scope.go:117] "RemoveContainer" containerID="d3455922feb76d66f9823a96627429ef61258ccc7bdea561471532fe321ff60c"
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.635487 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kccg6"]
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.636232 4875 scope.go:117] "RemoveContainer" containerID="cf92cab152a0b7db67ca1ed38dac38383a9db07dfa3646f38e0b0055516f7848"
Jan 30 17:09:34 crc kubenswrapper[4875]: E0130 17:09:34.636797 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf92cab152a0b7db67ca1ed38dac38383a9db07dfa3646f38e0b0055516f7848\": container with ID starting with cf92cab152a0b7db67ca1ed38dac38383a9db07dfa3646f38e0b0055516f7848 not found: ID does not exist" containerID="cf92cab152a0b7db67ca1ed38dac38383a9db07dfa3646f38e0b0055516f7848"
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.636827 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf92cab152a0b7db67ca1ed38dac38383a9db07dfa3646f38e0b0055516f7848"} err="failed to get container status \"cf92cab152a0b7db67ca1ed38dac38383a9db07dfa3646f38e0b0055516f7848\": rpc error: code = NotFound desc = could not find container \"cf92cab152a0b7db67ca1ed38dac38383a9db07dfa3646f38e0b0055516f7848\": container with ID starting with cf92cab152a0b7db67ca1ed38dac38383a9db07dfa3646f38e0b0055516f7848 not found: ID does not exist"
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.636871 4875 scope.go:117] "RemoveContainer" containerID="a81e326b5e0162e1ee15311267a723167a6a7f4e6e1c205e5ced33e9ae363c84"
Jan 30 17:09:34 crc kubenswrapper[4875]: E0130 17:09:34.637185 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a81e326b5e0162e1ee15311267a723167a6a7f4e6e1c205e5ced33e9ae363c84\": container with ID starting with a81e326b5e0162e1ee15311267a723167a6a7f4e6e1c205e5ced33e9ae363c84 not found: ID does not exist" containerID="a81e326b5e0162e1ee15311267a723167a6a7f4e6e1c205e5ced33e9ae363c84"
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.637280 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a81e326b5e0162e1ee15311267a723167a6a7f4e6e1c205e5ced33e9ae363c84"} err="failed to get container status \"a81e326b5e0162e1ee15311267a723167a6a7f4e6e1c205e5ced33e9ae363c84\": rpc error: code = NotFound desc = could not find container \"a81e326b5e0162e1ee15311267a723167a6a7f4e6e1c205e5ced33e9ae363c84\": container with ID starting with a81e326b5e0162e1ee15311267a723167a6a7f4e6e1c205e5ced33e9ae363c84 not found: ID does not exist"
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.637358 4875 scope.go:117] "RemoveContainer" containerID="d3455922feb76d66f9823a96627429ef61258ccc7bdea561471532fe321ff60c"
Jan 30 17:09:34 crc kubenswrapper[4875]: E0130 17:09:34.637705 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3455922feb76d66f9823a96627429ef61258ccc7bdea561471532fe321ff60c\": container with ID starting with d3455922feb76d66f9823a96627429ef61258ccc7bdea561471532fe321ff60c not found: ID does not exist" containerID="d3455922feb76d66f9823a96627429ef61258ccc7bdea561471532fe321ff60c"
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.637796 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3455922feb76d66f9823a96627429ef61258ccc7bdea561471532fe321ff60c"} err="failed to get container status \"d3455922feb76d66f9823a96627429ef61258ccc7bdea561471532fe321ff60c\": rpc error: code = NotFound desc = could not find container \"d3455922feb76d66f9823a96627429ef61258ccc7bdea561471532fe321ff60c\": container with ID starting with d3455922feb76d66f9823a96627429ef61258ccc7bdea561471532fe321ff60c not found: ID does not exist"
Jan 30 17:09:34 crc kubenswrapper[4875]: I0130 17:09:34.638873 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kccg6"]
Jan 30 17:09:35 crc kubenswrapper[4875]: I0130 17:09:35.586943 4875 generic.go:334] "Generic (PLEG): container finished" podID="f5b461b0-718a-4065-bf1d-db2860d2af04" containerID="dc8d13929f2bd70035d95128c83e2ce0c987a63aaa9e82f565375766734dcf44" exitCode=0
Jan 30 17:09:35 crc kubenswrapper[4875]: I0130 17:09:35.586998 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd" event={"ID":"f5b461b0-718a-4065-bf1d-db2860d2af04","Type":"ContainerDied","Data":"dc8d13929f2bd70035d95128c83e2ce0c987a63aaa9e82f565375766734dcf44"}
Jan 30 17:09:36 crc kubenswrapper[4875]: I0130 17:09:36.142254 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4265bc9-c7ff-478b-a97f-0c6227b26173" path="/var/lib/kubelet/pods/e4265bc9-c7ff-478b-a97f-0c6227b26173/volumes"
Jan 30 17:09:36 crc kubenswrapper[4875]: I0130 17:09:36.595859 4875 generic.go:334] "Generic (PLEG): container finished" podID="f5b461b0-718a-4065-bf1d-db2860d2af04" containerID="2199fbc962463c9463d5146b96b2f786a39233c582f9493cf8e69d50f85dc94c" exitCode=0
Jan 30 17:09:36 crc kubenswrapper[4875]: I0130 17:09:36.595908 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd" event={"ID":"f5b461b0-718a-4065-bf1d-db2860d2af04","Type":"ContainerDied","Data":"2199fbc962463c9463d5146b96b2f786a39233c582f9493cf8e69d50f85dc94c"}
Jan 30 17:09:37 crc kubenswrapper[4875]: I0130 17:09:37.828163 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd"
Jan 30 17:09:37 crc kubenswrapper[4875]: I0130 17:09:37.859831 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5b461b0-718a-4065-bf1d-db2860d2af04-util\") pod \"f5b461b0-718a-4065-bf1d-db2860d2af04\" (UID: \"f5b461b0-718a-4065-bf1d-db2860d2af04\") "
Jan 30 17:09:37 crc kubenswrapper[4875]: I0130 17:09:37.859911 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5b461b0-718a-4065-bf1d-db2860d2af04-bundle\") pod \"f5b461b0-718a-4065-bf1d-db2860d2af04\" (UID: \"f5b461b0-718a-4065-bf1d-db2860d2af04\") "
Jan 30 17:09:37 crc kubenswrapper[4875]: I0130 17:09:37.859951 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njwnj\" (UniqueName: \"kubernetes.io/projected/f5b461b0-718a-4065-bf1d-db2860d2af04-kube-api-access-njwnj\") pod \"f5b461b0-718a-4065-bf1d-db2860d2af04\" (UID: \"f5b461b0-718a-4065-bf1d-db2860d2af04\") "
Jan 30 17:09:37 crc kubenswrapper[4875]: I0130 17:09:37.860621 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5b461b0-718a-4065-bf1d-db2860d2af04-bundle" (OuterVolumeSpecName: "bundle") pod "f5b461b0-718a-4065-bf1d-db2860d2af04" (UID: "f5b461b0-718a-4065-bf1d-db2860d2af04"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:09:37 crc kubenswrapper[4875]: I0130 17:09:37.865898 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5b461b0-718a-4065-bf1d-db2860d2af04-kube-api-access-njwnj" (OuterVolumeSpecName: "kube-api-access-njwnj") pod "f5b461b0-718a-4065-bf1d-db2860d2af04" (UID: "f5b461b0-718a-4065-bf1d-db2860d2af04"). InnerVolumeSpecName "kube-api-access-njwnj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:09:37 crc kubenswrapper[4875]: I0130 17:09:37.875903 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5b461b0-718a-4065-bf1d-db2860d2af04-util" (OuterVolumeSpecName: "util") pod "f5b461b0-718a-4065-bf1d-db2860d2af04" (UID: "f5b461b0-718a-4065-bf1d-db2860d2af04"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:09:37 crc kubenswrapper[4875]: I0130 17:09:37.961570 4875 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5b461b0-718a-4065-bf1d-db2860d2af04-util\") on node \"crc\" DevicePath \"\""
Jan 30 17:09:37 crc kubenswrapper[4875]: I0130 17:09:37.961629 4875 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5b461b0-718a-4065-bf1d-db2860d2af04-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:09:37 crc kubenswrapper[4875]: I0130 17:09:37.961646 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njwnj\" (UniqueName: \"kubernetes.io/projected/f5b461b0-718a-4065-bf1d-db2860d2af04-kube-api-access-njwnj\") on node \"crc\" DevicePath \"\""
Jan 30 17:09:38 crc kubenswrapper[4875]: I0130 17:09:38.623381 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd" event={"ID":"f5b461b0-718a-4065-bf1d-db2860d2af04","Type":"ContainerDied","Data":"49d17c719e0af8a75be6aa7b12ad883a958929d103441b65c7ef0dc0bd8fbfd1"}
Jan 30 17:09:38 crc kubenswrapper[4875]: I0130 17:09:38.623786 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49d17c719e0af8a75be6aa7b12ad883a958929d103441b65c7ef0dc0bd8fbfd1"
Jan 30 17:09:38 crc kubenswrapper[4875]: I0130 17:09:38.623530 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd"
Jan 30 17:09:44 crc kubenswrapper[4875]: I0130 17:09:44.122468 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r"]
Jan 30 17:09:44 crc kubenswrapper[4875]: E0130 17:09:44.123074 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b461b0-718a-4065-bf1d-db2860d2af04" containerName="util"
Jan 30 17:09:44 crc kubenswrapper[4875]: I0130 17:09:44.123090 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b461b0-718a-4065-bf1d-db2860d2af04" containerName="util"
Jan 30 17:09:44 crc kubenswrapper[4875]: E0130 17:09:44.123108 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b461b0-718a-4065-bf1d-db2860d2af04" containerName="pull"
Jan 30 17:09:44 crc kubenswrapper[4875]: I0130 17:09:44.123115 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b461b0-718a-4065-bf1d-db2860d2af04" containerName="pull"
Jan 30 17:09:44 crc kubenswrapper[4875]: E0130 17:09:44.123134 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4265bc9-c7ff-478b-a97f-0c6227b26173" containerName="registry-server"
Jan 30 17:09:44 crc kubenswrapper[4875]: I0130 17:09:44.123143 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4265bc9-c7ff-478b-a97f-0c6227b26173" containerName="registry-server"
Jan 30 17:09:44 crc kubenswrapper[4875]: E0130 17:09:44.123153 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4265bc9-c7ff-478b-a97f-0c6227b26173" containerName="extract-content"
Jan 30 17:09:44 crc kubenswrapper[4875]: I0130 17:09:44.123161 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4265bc9-c7ff-478b-a97f-0c6227b26173" containerName="extract-content"
Jan 30 17:09:44 crc kubenswrapper[4875]: E0130 17:09:44.123179 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4265bc9-c7ff-478b-a97f-0c6227b26173" containerName="extract-utilities"
Jan 30 17:09:44 crc kubenswrapper[4875]: I0130 17:09:44.123187 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4265bc9-c7ff-478b-a97f-0c6227b26173" containerName="extract-utilities"
Jan 30 17:09:44 crc kubenswrapper[4875]: E0130 17:09:44.123197 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b461b0-718a-4065-bf1d-db2860d2af04" containerName="extract"
Jan 30 17:09:44 crc kubenswrapper[4875]: I0130 17:09:44.123205 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b461b0-718a-4065-bf1d-db2860d2af04" containerName="extract"
Jan 30 17:09:44 crc kubenswrapper[4875]: I0130 17:09:44.123331 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5b461b0-718a-4065-bf1d-db2860d2af04" containerName="extract"
Jan 30 17:09:44 crc kubenswrapper[4875]: I0130 17:09:44.123357 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4265bc9-c7ff-478b-a97f-0c6227b26173" containerName="registry-server"
Jan 30 17:09:44 crc kubenswrapper[4875]: I0130 17:09:44.123881 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r"
Jan 30 17:09:44 crc kubenswrapper[4875]: I0130 17:09:44.129035 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-8995d"
Jan 30 17:09:44 crc kubenswrapper[4875]: I0130 17:09:44.146909 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r"]
Jan 30 17:09:44 crc kubenswrapper[4875]: I0130 17:09:44.164331 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6lhq\" (UniqueName: \"kubernetes.io/projected/b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d-kube-api-access-d6lhq\") pod \"openstack-operator-controller-init-64d87976dc-xvd5r\" (UID: \"b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d\") " pod="openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r"
Jan 30 17:09:44 crc kubenswrapper[4875]: I0130 17:09:44.265470 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6lhq\" (UniqueName: \"kubernetes.io/projected/b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d-kube-api-access-d6lhq\") pod \"openstack-operator-controller-init-64d87976dc-xvd5r\" (UID: \"b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d\") " pod="openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r"
Jan 30 17:09:44 crc kubenswrapper[4875]: I0130 17:09:44.291056 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6lhq\" (UniqueName: \"kubernetes.io/projected/b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d-kube-api-access-d6lhq\") pod \"openstack-operator-controller-init-64d87976dc-xvd5r\" (UID: \"b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d\") " pod="openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r"
Jan 30 17:09:44 crc kubenswrapper[4875]: I0130 17:09:44.444709 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r"
Jan 30 17:09:44 crc kubenswrapper[4875]: I0130 17:09:44.661814 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r"]
Jan 30 17:09:45 crc kubenswrapper[4875]: I0130 17:09:45.666081 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r" event={"ID":"b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d","Type":"ContainerStarted","Data":"f3321421085f46cebae966c3e4eda008045c82586a6f1eb79f7f1389009c70e8"}
Jan 30 17:09:47 crc kubenswrapper[4875]: I0130 17:09:47.239454 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zmrqn"]
Jan 30 17:09:47 crc kubenswrapper[4875]: I0130 17:09:47.241060 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zmrqn"
Jan 30 17:09:47 crc kubenswrapper[4875]: I0130 17:09:47.249463 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zmrqn"]
Jan 30 17:09:47 crc kubenswrapper[4875]: I0130 17:09:47.404763 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcgt7\" (UniqueName: \"kubernetes.io/projected/fb88d068-b580-4a67-b8fd-154340307c58-kube-api-access-bcgt7\") pod \"redhat-operators-zmrqn\" (UID: \"fb88d068-b580-4a67-b8fd-154340307c58\") " pod="openshift-marketplace/redhat-operators-zmrqn"
Jan 30 17:09:47 crc kubenswrapper[4875]: I0130 17:09:47.404814 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb88d068-b580-4a67-b8fd-154340307c58-utilities\") pod \"redhat-operators-zmrqn\" (UID: \"fb88d068-b580-4a67-b8fd-154340307c58\") " pod="openshift-marketplace/redhat-operators-zmrqn"
Jan 30 17:09:47 crc kubenswrapper[4875]: I0130 17:09:47.404840 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb88d068-b580-4a67-b8fd-154340307c58-catalog-content\") pod \"redhat-operators-zmrqn\" (UID: \"fb88d068-b580-4a67-b8fd-154340307c58\") " pod="openshift-marketplace/redhat-operators-zmrqn"
Jan 30 17:09:47 crc kubenswrapper[4875]: I0130 17:09:47.506578 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcgt7\" (UniqueName: \"kubernetes.io/projected/fb88d068-b580-4a67-b8fd-154340307c58-kube-api-access-bcgt7\") pod \"redhat-operators-zmrqn\" (UID: \"fb88d068-b580-4a67-b8fd-154340307c58\") " pod="openshift-marketplace/redhat-operators-zmrqn"
Jan 30 17:09:47 crc kubenswrapper[4875]: I0130 17:09:47.506654 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb88d068-b580-4a67-b8fd-154340307c58-utilities\") pod \"redhat-operators-zmrqn\" (UID: \"fb88d068-b580-4a67-b8fd-154340307c58\") " pod="openshift-marketplace/redhat-operators-zmrqn"
Jan 30 17:09:47 crc kubenswrapper[4875]: I0130 17:09:47.506684 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb88d068-b580-4a67-b8fd-154340307c58-catalog-content\") pod \"redhat-operators-zmrqn\" (UID: \"fb88d068-b580-4a67-b8fd-154340307c58\") " pod="openshift-marketplace/redhat-operators-zmrqn"
Jan 30 17:09:47 crc kubenswrapper[4875]: I0130 17:09:47.507154 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb88d068-b580-4a67-b8fd-154340307c58-utilities\") pod \"redhat-operators-zmrqn\" (UID: \"fb88d068-b580-4a67-b8fd-154340307c58\") " pod="openshift-marketplace/redhat-operators-zmrqn"
Jan 30 17:09:47 crc kubenswrapper[4875]: I0130 17:09:47.507271 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb88d068-b580-4a67-b8fd-154340307c58-catalog-content\") pod \"redhat-operators-zmrqn\" (UID: \"fb88d068-b580-4a67-b8fd-154340307c58\") " pod="openshift-marketplace/redhat-operators-zmrqn"
Jan 30 17:09:47 crc kubenswrapper[4875]: I0130 17:09:47.547196 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcgt7\" (UniqueName: \"kubernetes.io/projected/fb88d068-b580-4a67-b8fd-154340307c58-kube-api-access-bcgt7\") pod \"redhat-operators-zmrqn\" (UID: \"fb88d068-b580-4a67-b8fd-154340307c58\") " pod="openshift-marketplace/redhat-operators-zmrqn"
Jan 30 17:09:47 crc kubenswrapper[4875]: I0130 17:09:47.569745 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zmrqn"
Jan 30 17:09:48 crc kubenswrapper[4875]: I0130 17:09:48.770798 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zmrqn"]
Jan 30 17:09:48 crc kubenswrapper[4875]: W0130 17:09:48.774397 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb88d068_b580_4a67_b8fd_154340307c58.slice/crio-eca0c0c0c9fc42f95eb1e6b48d19104d82f30bffc2ca31467b6f156a25118aa2 WatchSource:0}: Error finding container eca0c0c0c9fc42f95eb1e6b48d19104d82f30bffc2ca31467b6f156a25118aa2: Status 404 returned error can't find the container with id eca0c0c0c9fc42f95eb1e6b48d19104d82f30bffc2ca31467b6f156a25118aa2
Jan 30 17:09:49 crc kubenswrapper[4875]: I0130 17:09:49.696314 4875 generic.go:334] "Generic (PLEG): container finished" podID="fb88d068-b580-4a67-b8fd-154340307c58" containerID="3344ad38f922d00917f96a2c6bed3fec8a06c4616d139424b4ad5830ca4c6b4c" exitCode=0
Jan 30 17:09:49 crc kubenswrapper[4875]: I0130 17:09:49.696390 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmrqn" event={"ID":"fb88d068-b580-4a67-b8fd-154340307c58","Type":"ContainerDied","Data":"3344ad38f922d00917f96a2c6bed3fec8a06c4616d139424b4ad5830ca4c6b4c"}
Jan 30 17:09:49 crc kubenswrapper[4875]: I0130 17:09:49.696773 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmrqn" event={"ID":"fb88d068-b580-4a67-b8fd-154340307c58","Type":"ContainerStarted","Data":"eca0c0c0c9fc42f95eb1e6b48d19104d82f30bffc2ca31467b6f156a25118aa2"}
Jan 30 17:09:49 crc kubenswrapper[4875]: I0130 17:09:49.700335 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r" event={"ID":"b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d","Type":"ContainerStarted","Data":"6c36d432ebf9367d608513184b4805d9a3e1655fd4d5ee543724ccba99e8b017"}
Jan 30 17:09:49 crc kubenswrapper[4875]: I0130 17:09:49.700625 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r"
Jan 30 17:09:49 crc kubenswrapper[4875]: I0130 17:09:49.742328 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r" podStartSLOduration=1.758557388 podStartE2EDuration="5.742309809s" podCreationTimestamp="2026-01-30 17:09:44 +0000 UTC" firstStartedPulling="2026-01-30 17:09:44.675107154 +0000 UTC m=+795.222470537" lastFinishedPulling="2026-01-30 17:09:48.658859575 +0000 UTC m=+799.206222958" observedRunningTime="2026-01-30 17:09:49.738475549 +0000 UTC m=+800.285838952" watchObservedRunningTime="2026-01-30 17:09:49.742309809 +0000 UTC m=+800.289673192"
Jan 30 17:09:50 crc kubenswrapper[4875]: I0130 17:09:50.710481 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmrqn" event={"ID":"fb88d068-b580-4a67-b8fd-154340307c58","Type":"ContainerStarted","Data":"1e7e13e933a10235adb8fa09016bf96b1210a776011b7ca4291a56d50d307f7b"}
Jan 30 17:09:51 crc kubenswrapper[4875]: I0130 17:09:51.717521 4875 generic.go:334] "Generic (PLEG): container finished" podID="fb88d068-b580-4a67-b8fd-154340307c58" containerID="1e7e13e933a10235adb8fa09016bf96b1210a776011b7ca4291a56d50d307f7b" exitCode=0
Jan 30 17:09:51 crc kubenswrapper[4875]: I0130 17:09:51.717564 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmrqn" event={"ID":"fb88d068-b580-4a67-b8fd-154340307c58","Type":"ContainerDied","Data":"1e7e13e933a10235adb8fa09016bf96b1210a776011b7ca4291a56d50d307f7b"}
Jan 30 17:09:52 crc kubenswrapper[4875]: I0130 17:09:52.724457 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmrqn" event={"ID":"fb88d068-b580-4a67-b8fd-154340307c58","Type":"ContainerStarted","Data":"a0b0c5c34f33f32516611a5167b8c77a59e89892d3fdef7757b17dc4ec637490"}
Jan 30 17:09:52 crc kubenswrapper[4875]: I0130 17:09:52.743803 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zmrqn" podStartSLOduration=3.205981554 podStartE2EDuration="5.743784344s" podCreationTimestamp="2026-01-30 17:09:47 +0000 UTC" firstStartedPulling="2026-01-30 17:09:49.698665826 +0000 UTC m=+800.246029209" lastFinishedPulling="2026-01-30 17:09:52.236468616 +0000 UTC m=+802.783831999" observedRunningTime="2026-01-30 17:09:52.740015757 +0000 UTC m=+803.287379140" watchObservedRunningTime="2026-01-30 17:09:52.743784344 +0000 UTC m=+803.291147717"
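
The two pod_startup_latency_tracker entries above encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from it, i.e. startup latency with pull time excluded. A short Go check that recomputes both figures for the openstack-operator-controller-init pod, using only timestamps taken from the log itself:

package main

import (
	"fmt"
	"time"
)

// Recomputes the two durations reported by pod_startup_latency_tracker
// from the raw timestamps in the log entry above.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2026-01-30 17:09:44 +0000 UTC")
	firstPull := parse("2026-01-30 17:09:44.675107154 +0000 UTC")
	lastPull := parse("2026-01-30 17:09:48.658859575 +0000 UTC")
	observed := parse("2026-01-30 17:09:49.742309809 +0000 UTC")

	e2e := observed.Sub(created)          // end-to-end: running - created
	slo := e2e - lastPull.Sub(firstPull)  // SLO figure: e2e minus pull window
	fmt.Println("podStartE2EDuration:", e2e) // 5.742309809s
	fmt.Println("podStartSLOduration:", slo) // 1.758557388s
}

Both printed durations match the logged values, confirming that the image-pull window accounts for the whole gap between the two figures.
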
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zmrqn"] Jan 30 17:09:59 crc kubenswrapper[4875]: I0130 17:09:59.762413 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zmrqn" podUID="fb88d068-b580-4a67-b8fd-154340307c58" containerName="registry-server" containerID="cri-o://a0b0c5c34f33f32516611a5167b8c77a59e89892d3fdef7757b17dc4ec637490" gracePeriod=2 Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.205432 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zmrqn" Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.365840 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb88d068-b580-4a67-b8fd-154340307c58-utilities\") pod \"fb88d068-b580-4a67-b8fd-154340307c58\" (UID: \"fb88d068-b580-4a67-b8fd-154340307c58\") " Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.365925 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcgt7\" (UniqueName: \"kubernetes.io/projected/fb88d068-b580-4a67-b8fd-154340307c58-kube-api-access-bcgt7\") pod \"fb88d068-b580-4a67-b8fd-154340307c58\" (UID: \"fb88d068-b580-4a67-b8fd-154340307c58\") " Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.366077 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb88d068-b580-4a67-b8fd-154340307c58-catalog-content\") pod \"fb88d068-b580-4a67-b8fd-154340307c58\" (UID: \"fb88d068-b580-4a67-b8fd-154340307c58\") " Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.366800 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb88d068-b580-4a67-b8fd-154340307c58-utilities" (OuterVolumeSpecName: "utilities") pod "fb88d068-b580-4a67-b8fd-154340307c58" (UID: "fb88d068-b580-4a67-b8fd-154340307c58"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.379851 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb88d068-b580-4a67-b8fd-154340307c58-kube-api-access-bcgt7" (OuterVolumeSpecName: "kube-api-access-bcgt7") pod "fb88d068-b580-4a67-b8fd-154340307c58" (UID: "fb88d068-b580-4a67-b8fd-154340307c58"). InnerVolumeSpecName "kube-api-access-bcgt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.467205 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb88d068-b580-4a67-b8fd-154340307c58-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.467240 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcgt7\" (UniqueName: \"kubernetes.io/projected/fb88d068-b580-4a67-b8fd-154340307c58-kube-api-access-bcgt7\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.497970 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb88d068-b580-4a67-b8fd-154340307c58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb88d068-b580-4a67-b8fd-154340307c58" (UID: "fb88d068-b580-4a67-b8fd-154340307c58"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.568998 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb88d068-b580-4a67-b8fd-154340307c58-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.771217 4875 generic.go:334] "Generic (PLEG): container finished" podID="fb88d068-b580-4a67-b8fd-154340307c58" containerID="a0b0c5c34f33f32516611a5167b8c77a59e89892d3fdef7757b17dc4ec637490" exitCode=0 Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.771278 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmrqn" event={"ID":"fb88d068-b580-4a67-b8fd-154340307c58","Type":"ContainerDied","Data":"a0b0c5c34f33f32516611a5167b8c77a59e89892d3fdef7757b17dc4ec637490"} Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.771312 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zmrqn" event={"ID":"fb88d068-b580-4a67-b8fd-154340307c58","Type":"ContainerDied","Data":"eca0c0c0c9fc42f95eb1e6b48d19104d82f30bffc2ca31467b6f156a25118aa2"} Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.771331 4875 scope.go:117] "RemoveContainer" containerID="a0b0c5c34f33f32516611a5167b8c77a59e89892d3fdef7757b17dc4ec637490" Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.771332 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zmrqn" Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.813823 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zmrqn"] Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.814424 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zmrqn"] Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.815673 4875 scope.go:117] "RemoveContainer" containerID="1e7e13e933a10235adb8fa09016bf96b1210a776011b7ca4291a56d50d307f7b" Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.829987 4875 scope.go:117] "RemoveContainer" containerID="3344ad38f922d00917f96a2c6bed3fec8a06c4616d139424b4ad5830ca4c6b4c" Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.850349 4875 scope.go:117] "RemoveContainer" containerID="a0b0c5c34f33f32516611a5167b8c77a59e89892d3fdef7757b17dc4ec637490" Jan 30 17:10:00 crc kubenswrapper[4875]: E0130 17:10:00.850864 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0b0c5c34f33f32516611a5167b8c77a59e89892d3fdef7757b17dc4ec637490\": container with ID starting with a0b0c5c34f33f32516611a5167b8c77a59e89892d3fdef7757b17dc4ec637490 not found: ID does not exist" containerID="a0b0c5c34f33f32516611a5167b8c77a59e89892d3fdef7757b17dc4ec637490" Jan 30 17:10:00 crc kubenswrapper[4875]: I0130 17:10:00.850902 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0b0c5c34f33f32516611a5167b8c77a59e89892d3fdef7757b17dc4ec637490"} err="failed to get container status \"a0b0c5c34f33f32516611a5167b8c77a59e89892d3fdef7757b17dc4ec637490\": rpc error: code = NotFound desc = could not find container \"a0b0c5c34f33f32516611a5167b8c77a59e89892d3fdef7757b17dc4ec637490\": container with ID starting with a0b0c5c34f33f32516611a5167b8c77a59e89892d3fdef7757b17dc4ec637490 not found: ID does not exist" Jan 30 17:10:00 crc 
Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.069311 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mjlwh"]
Jan 30 17:10:14 crc kubenswrapper[4875]: E0130 17:10:14.070156 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb88d068-b580-4a67-b8fd-154340307c58" containerName="extract-content"
Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.070173 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb88d068-b580-4a67-b8fd-154340307c58" containerName="extract-content"
Jan 30 17:10:14 crc kubenswrapper[4875]: E0130 17:10:14.070184 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb88d068-b580-4a67-b8fd-154340307c58" containerName="registry-server"
Jan 30 17:10:14 crc kubenswrapper[4875]: I0130
17:10:14.070191 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb88d068-b580-4a67-b8fd-154340307c58" containerName="registry-server" Jan 30 17:10:14 crc kubenswrapper[4875]: E0130 17:10:14.070212 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb88d068-b580-4a67-b8fd-154340307c58" containerName="extract-utilities" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.070220 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb88d068-b580-4a67-b8fd-154340307c58" containerName="extract-utilities" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.070354 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb88d068-b580-4a67-b8fd-154340307c58" containerName="registry-server" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.070870 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mjlwh" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.073341 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-s6c8f" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.074488 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-dm9v4"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.075473 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-dm9v4" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.076800 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-5pvxr" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.079092 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mjlwh"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.090574 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-gbhbx"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.091318 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gbhbx" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.094058 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-qzm5g" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.099865 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-znpxc"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.100970 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-znpxc" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.102615 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-s5c27" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.105894 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-dm9v4"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.111629 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-bvnzf"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.112415 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bvnzf" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.114213 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-lf4zh" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.122924 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-gbhbx"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.127457 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-bvnzf"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.131070 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-znpxc"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.134627 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-fpcz4"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.135634 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fpcz4" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.137779 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-7c7ms" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.155017 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwt45\" (UniqueName: \"kubernetes.io/projected/4d112d50-a873-440f-b366-332c135cd9cf-kube-api-access-cwt45\") pod \"cinder-operator-controller-manager-8d874c8fc-dm9v4\" (UID: \"4d112d50-a873-440f-b366-332c135cd9cf\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-dm9v4" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.155087 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fjqk\" (UniqueName: \"kubernetes.io/projected/be56ef14-c793-4e0a-82bb-4e29b4182e22-kube-api-access-4fjqk\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-mjlwh\" (UID: \"be56ef14-c793-4e0a-82bb-4e29b4182e22\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mjlwh" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.157312 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-frg6k"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.158451 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.166567 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-l9rmf" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.166570 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.180364 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-fpcz4"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.229686 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-cpvgb"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.230785 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cpvgb" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.234833 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-jncqt" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.248167 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fdmpd"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.249818 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fdmpd" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.258374 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-66jjk" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.259338 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwt45\" (UniqueName: \"kubernetes.io/projected/4d112d50-a873-440f-b366-332c135cd9cf-kube-api-access-cwt45\") pod \"cinder-operator-controller-manager-8d874c8fc-dm9v4\" (UID: \"4d112d50-a873-440f-b366-332c135cd9cf\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-dm9v4" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.259377 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbv4q\" (UniqueName: \"kubernetes.io/projected/14395019-dadc-4326-8a88-3f8746438a60-kube-api-access-bbv4q\") pod \"horizon-operator-controller-manager-5fb775575f-fpcz4\" (UID: \"14395019-dadc-4326-8a88-3f8746438a60\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fpcz4" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.259398 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fjqk\" (UniqueName: \"kubernetes.io/projected/be56ef14-c793-4e0a-82bb-4e29b4182e22-kube-api-access-4fjqk\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-mjlwh\" (UID: \"be56ef14-c793-4e0a-82bb-4e29b4182e22\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mjlwh" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.259416 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert\") pod \"infra-operator-controller-manager-79955696d6-frg6k\" (UID: \"9a2f99f7-889a-4847-88f0-3241c2fa3353\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.259443 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvdlm\" (UniqueName: \"kubernetes.io/projected/d6508139-1b0b-45c7-b307-901c0903370f-kube-api-access-kvdlm\") pod \"heat-operator-controller-manager-69d6db494d-bvnzf\" (UID: \"d6508139-1b0b-45c7-b307-901c0903370f\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bvnzf" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.259489 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdl6m\" (UniqueName: \"kubernetes.io/projected/daa61e94-524b-445a-8086-63a4a3db6764-kube-api-access-vdl6m\") pod \"designate-operator-controller-manager-6d9697b7f4-znpxc\" (UID: \"daa61e94-524b-445a-8086-63a4a3db6764\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-znpxc" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.259527 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5qq9\" (UniqueName: \"kubernetes.io/projected/89036e1f-6293-456d-ae24-6a52b2a102d9-kube-api-access-k5qq9\") pod \"glance-operator-controller-manager-8886f4c47-gbhbx\" (UID: \"89036e1f-6293-456d-ae24-6a52b2a102d9\") " 
pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gbhbx" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.259545 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqs5s\" (UniqueName: \"kubernetes.io/projected/9a2f99f7-889a-4847-88f0-3241c2fa3353-kube-api-access-sqs5s\") pod \"infra-operator-controller-manager-79955696d6-frg6k\" (UID: \"9a2f99f7-889a-4847-88f0-3241c2fa3353\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.265661 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-cpvgb"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.285330 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwt45\" (UniqueName: \"kubernetes.io/projected/4d112d50-a873-440f-b366-332c135cd9cf-kube-api-access-cwt45\") pod \"cinder-operator-controller-manager-8d874c8fc-dm9v4\" (UID: \"4d112d50-a873-440f-b366-332c135cd9cf\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-dm9v4" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.288343 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fjqk\" (UniqueName: \"kubernetes.io/projected/be56ef14-c793-4e0a-82bb-4e29b4182e22-kube-api-access-4fjqk\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-mjlwh\" (UID: \"be56ef14-c793-4e0a-82bb-4e29b4182e22\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mjlwh" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.300645 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-frg6k"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.317074 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fdmpd"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.327080 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-nzlnv"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.328005 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-nzlnv" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.331282 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-4f9gd" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.345930 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-d74js"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.346762 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-d74js" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.358998 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-rg89w" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.360293 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert\") pod \"infra-operator-controller-manager-79955696d6-frg6k\" (UID: \"9a2f99f7-889a-4847-88f0-3241c2fa3353\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.360321 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbv4q\" (UniqueName: \"kubernetes.io/projected/14395019-dadc-4326-8a88-3f8746438a60-kube-api-access-bbv4q\") pod \"horizon-operator-controller-manager-5fb775575f-fpcz4\" (UID: \"14395019-dadc-4326-8a88-3f8746438a60\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fpcz4" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.360346 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvdlm\" (UniqueName: \"kubernetes.io/projected/d6508139-1b0b-45c7-b307-901c0903370f-kube-api-access-kvdlm\") pod \"heat-operator-controller-manager-69d6db494d-bvnzf\" (UID: \"d6508139-1b0b-45c7-b307-901c0903370f\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bvnzf" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.360384 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz6tq\" (UniqueName: \"kubernetes.io/projected/1a65b1f7-9d89-4a8b-9af9-811495df5c5f-kube-api-access-zz6tq\") pod \"keystone-operator-controller-manager-84f48565d4-cpvgb\" (UID: \"1a65b1f7-9d89-4a8b-9af9-811495df5c5f\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cpvgb" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.360406 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdl6m\" (UniqueName: \"kubernetes.io/projected/daa61e94-524b-445a-8086-63a4a3db6764-kube-api-access-vdl6m\") pod \"designate-operator-controller-manager-6d9697b7f4-znpxc\" (UID: \"daa61e94-524b-445a-8086-63a4a3db6764\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-znpxc" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.360444 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfx67\" (UniqueName: \"kubernetes.io/projected/792a5bfa-13bb-4e86-ab45-09dd184fcab3-kube-api-access-kfx67\") pod \"ironic-operator-controller-manager-5f4b8bd54d-fdmpd\" (UID: \"792a5bfa-13bb-4e86-ab45-09dd184fcab3\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fdmpd" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.360473 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5qq9\" (UniqueName: \"kubernetes.io/projected/89036e1f-6293-456d-ae24-6a52b2a102d9-kube-api-access-k5qq9\") pod \"glance-operator-controller-manager-8886f4c47-gbhbx\" (UID: \"89036e1f-6293-456d-ae24-6a52b2a102d9\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gbhbx" Jan 30 17:10:14 crc 
kubenswrapper[4875]: I0130 17:10:14.360492 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqs5s\" (UniqueName: \"kubernetes.io/projected/9a2f99f7-889a-4847-88f0-3241c2fa3353-kube-api-access-sqs5s\") pod \"infra-operator-controller-manager-79955696d6-frg6k\" (UID: \"9a2f99f7-889a-4847-88f0-3241c2fa3353\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k"
Jan 30 17:10:14 crc kubenswrapper[4875]: E0130 17:10:14.360941 4875 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 30 17:10:14 crc kubenswrapper[4875]: E0130 17:10:14.363478 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert podName:9a2f99f7-889a-4847-88f0-3241c2fa3353 nodeName:}" failed. No retries permitted until 2026-01-30 17:10:14.863448834 +0000 UTC m=+825.410812217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert") pod "infra-operator-controller-manager-79955696d6-frg6k" (UID: "9a2f99f7-889a-4847-88f0-3241c2fa3353") : secret "infra-operator-webhook-server-cert" not found
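
The failed "cert" mount above is expected at this point in the rollout: the infra-operator's webhook serving secret has not been created yet (the earlier "Caches populated" line for infra-operator-webhook-server-cert only means the kubelet's watch synced, not that the object exists), so the kubelet parks the operation and retries 500ms later ("durationBeforeRetry 500ms"), backing off further on repeated failures until the secret appears. A Go sketch of that retry-with-backoff shape; the doubling factor, cap, and helper names are illustrative, not kubelet's exact parameters:

package main

import (
	"fmt"
	"time"
)

// mountWithRetry retries a failing setup call with a growing delay,
// mirroring the "No retries permitted until ... (durationBeforeRetry
// 500ms)" behaviour in the log. Doubling and cap are illustrative.
func mountWithRetry(setUp func() error, first, max time.Duration, attempts int) error {
	delay := first
	var err error
	for i := 0; i < attempts; i++ {
		if err = setUp(); err == nil {
			return nil
		}
		fmt.Printf("retry in %v: %v\n", delay, err)
		time.Sleep(delay)
		if delay *= 2; delay > max {
			delay = max
		}
	}
	return err
}

func main() {
	calls := 0
	setUp := func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("secret %q not found", "infra-operator-webhook-server-cert")
		}
		return nil // secret finally exists; mount succeeds
	}
	fmt.Println(mountWithRetry(setUp, 500*time.Millisecond, 2*time.Second, 10))
}
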
\"designate-operator-controller-manager-6d9697b7f4-znpxc\" (UID: \"daa61e94-524b-445a-8086-63a4a3db6764\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-znpxc" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.409357 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-w75bt" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.412165 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqs5s\" (UniqueName: \"kubernetes.io/projected/9a2f99f7-889a-4847-88f0-3241c2fa3353-kube-api-access-sqs5s\") pod \"infra-operator-controller-manager-79955696d6-frg6k\" (UID: \"9a2f99f7-889a-4847-88f0-3241c2fa3353\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.420448 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-922hl" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.420726 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5c487c8746-9msld"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.421789 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5c487c8746-9msld" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.423517 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rr67l" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.428040 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-w75bt"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.435907 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5c487c8746-9msld"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.455917 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-h9cpk"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.456785 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h9cpk" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.458319 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-s8gp9" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.461206 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz6tq\" (UniqueName: \"kubernetes.io/projected/1a65b1f7-9d89-4a8b-9af9-811495df5c5f-kube-api-access-zz6tq\") pod \"keystone-operator-controller-manager-84f48565d4-cpvgb\" (UID: \"1a65b1f7-9d89-4a8b-9af9-811495df5c5f\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cpvgb" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.461245 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l986f\" (UniqueName: \"kubernetes.io/projected/a8c14e5e-0827-45c6-8e21-c524ad39fb11-kube-api-access-l986f\") pod \"mariadb-operator-controller-manager-67bf948998-d74js\" (UID: \"a8c14e5e-0827-45c6-8e21-c524ad39fb11\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-d74js" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.461284 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfx67\" (UniqueName: \"kubernetes.io/projected/792a5bfa-13bb-4e86-ab45-09dd184fcab3-kube-api-access-kfx67\") pod \"ironic-operator-controller-manager-5f4b8bd54d-fdmpd\" (UID: \"792a5bfa-13bb-4e86-ab45-09dd184fcab3\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fdmpd" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.461303 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w65zp\" (UniqueName: \"kubernetes.io/projected/408af5cb-dfce-44ff-9b25-5378f194196f-kube-api-access-w65zp\") pod \"manila-operator-controller-manager-7dd968899f-nzlnv\" (UID: \"408af5cb-dfce-44ff-9b25-5378f194196f\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-nzlnv" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.464298 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-h9cpk"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.470990 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-xnw72"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.471854 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnw72" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.474301 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-dxfsh" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.476437 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mjlwh" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.480828 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz6tq\" (UniqueName: \"kubernetes.io/projected/1a65b1f7-9d89-4a8b-9af9-811495df5c5f-kube-api-access-zz6tq\") pod \"keystone-operator-controller-manager-84f48565d4-cpvgb\" (UID: \"1a65b1f7-9d89-4a8b-9af9-811495df5c5f\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cpvgb" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.482233 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-xnw72"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.486007 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfx67\" (UniqueName: \"kubernetes.io/projected/792a5bfa-13bb-4e86-ab45-09dd184fcab3-kube-api-access-kfx67\") pod \"ironic-operator-controller-manager-5f4b8bd54d-fdmpd\" (UID: \"792a5bfa-13bb-4e86-ab45-09dd184fcab3\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fdmpd" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.486743 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-dm9v4" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.489196 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.490150 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.493416 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cdz4h" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.493697 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.498154 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-8dhn6"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.499493 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8dhn6" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.500023 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gbhbx" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.503920 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-sb9lw" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.509003 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-8dhn6"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.514063 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.520920 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-znpxc" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.524390 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-zxs9g"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.525215 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zxs9g" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.529988 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2nw52" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.530843 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-zxs9g"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.540045 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bvnzf" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.561767 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fpcz4" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.568507 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l986f\" (UniqueName: \"kubernetes.io/projected/a8c14e5e-0827-45c6-8e21-c524ad39fb11-kube-api-access-l986f\") pod \"mariadb-operator-controller-manager-67bf948998-d74js\" (UID: \"a8c14e5e-0827-45c6-8e21-c524ad39fb11\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-d74js" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.568775 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt9h5\" (UniqueName: \"kubernetes.io/projected/044cc22a-35c3-49ac-8c70-80478ce3f670-kube-api-access-zt9h5\") pod \"octavia-operator-controller-manager-6687f8d877-h9cpk\" (UID: \"044cc22a-35c3-49ac-8c70-80478ce3f670\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h9cpk" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.568970 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w65zp\" (UniqueName: \"kubernetes.io/projected/408af5cb-dfce-44ff-9b25-5378f194196f-kube-api-access-w65zp\") pod \"manila-operator-controller-manager-7dd968899f-nzlnv\" (UID: \"408af5cb-dfce-44ff-9b25-5378f194196f\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-nzlnv" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.569077 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t42jw\" (UniqueName: \"kubernetes.io/projected/972271b3-306a-4015-be23-c1320e0c296e-kube-api-access-t42jw\") pod \"neutron-operator-controller-manager-585dbc889-w75bt\" (UID: \"972271b3-306a-4015-be23-c1320e0c296e\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-w75bt" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.569149 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j27qn\" (UniqueName: \"kubernetes.io/projected/bbef4553-54c5-4fcb-9868-49c67b9420b5-kube-api-access-j27qn\") pod \"nova-operator-controller-manager-5c487c8746-9msld\" (UID: \"bbef4553-54c5-4fcb-9868-49c67b9420b5\") " pod="openstack-operators/nova-operator-controller-manager-5c487c8746-9msld" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.569320 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2\" (UID: \"59490e66-2646-4a95-9b81-e372fbd2f921\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.571523 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snszj\" (UniqueName: \"kubernetes.io/projected/cefac6c5-5765-4646-a5c1-9832fb0170d6-kube-api-access-snszj\") pod \"ovn-operator-controller-manager-788c46999f-xnw72\" (UID: \"cefac6c5-5765-4646-a5c1-9832fb0170d6\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnw72" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.573531 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-sstvr\" (UniqueName: \"kubernetes.io/projected/59490e66-2646-4a95-9b81-e372fbd2f921-kube-api-access-sstvr\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2\" (UID: \"59490e66-2646-4a95-9b81-e372fbd2f921\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.574931 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ns9pg"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.579057 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ns9pg" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.582864 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-7rzxp" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.588866 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w65zp\" (UniqueName: \"kubernetes.io/projected/408af5cb-dfce-44ff-9b25-5378f194196f-kube-api-access-w65zp\") pod \"manila-operator-controller-manager-7dd968899f-nzlnv\" (UID: \"408af5cb-dfce-44ff-9b25-5378f194196f\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-nzlnv" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.591026 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l986f\" (UniqueName: \"kubernetes.io/projected/a8c14e5e-0827-45c6-8e21-c524ad39fb11-kube-api-access-l986f\") pod \"mariadb-operator-controller-manager-67bf948998-d74js\" (UID: \"a8c14e5e-0827-45c6-8e21-c524ad39fb11\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-d74js" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.596860 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ns9pg"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.631334 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ld7cp"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.632596 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cpvgb" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.633193 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ld7cp" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.635816 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-sz6v4" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.641220 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ld7cp"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.648687 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fdmpd" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.666281 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-nzlnv" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.675860 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t42jw\" (UniqueName: \"kubernetes.io/projected/972271b3-306a-4015-be23-c1320e0c296e-kube-api-access-t42jw\") pod \"neutron-operator-controller-manager-585dbc889-w75bt\" (UID: \"972271b3-306a-4015-be23-c1320e0c296e\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-w75bt" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.675900 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j27qn\" (UniqueName: \"kubernetes.io/projected/bbef4553-54c5-4fcb-9868-49c67b9420b5-kube-api-access-j27qn\") pod \"nova-operator-controller-manager-5c487c8746-9msld\" (UID: \"bbef4553-54c5-4fcb-9868-49c67b9420b5\") " pod="openstack-operators/nova-operator-controller-manager-5c487c8746-9msld" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.675937 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2\" (UID: \"59490e66-2646-4a95-9b81-e372fbd2f921\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.675960 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdnl4\" (UniqueName: \"kubernetes.io/projected/d3967345-0c3d-431b-8408-3f7beaba730d-kube-api-access-hdnl4\") pod \"placement-operator-controller-manager-5b964cf4cd-8dhn6\" (UID: \"d3967345-0c3d-431b-8408-3f7beaba730d\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8dhn6" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.675983 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx5rg\" (UniqueName: \"kubernetes.io/projected/921d8e30-00c8-43e3-b44a-4de9e4450ba2-kube-api-access-xx5rg\") pod \"telemetry-operator-controller-manager-64b5b76f97-ns9pg\" (UID: \"921d8e30-00c8-43e3-b44a-4de9e4450ba2\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ns9pg" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.676028 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn69m\" (UniqueName: \"kubernetes.io/projected/f4df6dfd-91eb-4d61-93fd-b93e111eb127-kube-api-access-zn69m\") pod \"swift-operator-controller-manager-68fc8c869-zxs9g\" (UID: \"f4df6dfd-91eb-4d61-93fd-b93e111eb127\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zxs9g" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.676048 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snszj\" (UniqueName: \"kubernetes.io/projected/cefac6c5-5765-4646-a5c1-9832fb0170d6-kube-api-access-snszj\") pod \"ovn-operator-controller-manager-788c46999f-xnw72\" (UID: \"cefac6c5-5765-4646-a5c1-9832fb0170d6\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnw72" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.676069 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sstvr\" (UniqueName: 
\"kubernetes.io/projected/59490e66-2646-4a95-9b81-e372fbd2f921-kube-api-access-sstvr\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2\" (UID: \"59490e66-2646-4a95-9b81-e372fbd2f921\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.676103 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt9h5\" (UniqueName: \"kubernetes.io/projected/044cc22a-35c3-49ac-8c70-80478ce3f670-kube-api-access-zt9h5\") pod \"octavia-operator-controller-manager-6687f8d877-h9cpk\" (UID: \"044cc22a-35c3-49ac-8c70-80478ce3f670\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h9cpk" Jan 30 17:10:14 crc kubenswrapper[4875]: E0130 17:10:14.676690 4875 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:10:14 crc kubenswrapper[4875]: E0130 17:10:14.676734 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert podName:59490e66-2646-4a95-9b81-e372fbd2f921 nodeName:}" failed. No retries permitted until 2026-01-30 17:10:15.176720149 +0000 UTC m=+825.724083532 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" (UID: "59490e66-2646-4a95-9b81-e372fbd2f921") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.678754 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-d74js" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.679203 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-z4fxd"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.680271 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-z4fxd" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.684735 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-z4fxd"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.684926 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-6frb5" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.703934 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j27qn\" (UniqueName: \"kubernetes.io/projected/bbef4553-54c5-4fcb-9868-49c67b9420b5-kube-api-access-j27qn\") pod \"nova-operator-controller-manager-5c487c8746-9msld\" (UID: \"bbef4553-54c5-4fcb-9868-49c67b9420b5\") " pod="openstack-operators/nova-operator-controller-manager-5c487c8746-9msld" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.705317 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t42jw\" (UniqueName: \"kubernetes.io/projected/972271b3-306a-4015-be23-c1320e0c296e-kube-api-access-t42jw\") pod \"neutron-operator-controller-manager-585dbc889-w75bt\" (UID: \"972271b3-306a-4015-be23-c1320e0c296e\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-w75bt" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.707998 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snszj\" (UniqueName: \"kubernetes.io/projected/cefac6c5-5765-4646-a5c1-9832fb0170d6-kube-api-access-snszj\") pod \"ovn-operator-controller-manager-788c46999f-xnw72\" (UID: \"cefac6c5-5765-4646-a5c1-9832fb0170d6\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnw72" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.708305 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sstvr\" (UniqueName: \"kubernetes.io/projected/59490e66-2646-4a95-9b81-e372fbd2f921-kube-api-access-sstvr\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2\" (UID: \"59490e66-2646-4a95-9b81-e372fbd2f921\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.710291 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt9h5\" (UniqueName: \"kubernetes.io/projected/044cc22a-35c3-49ac-8c70-80478ce3f670-kube-api-access-zt9h5\") pod \"octavia-operator-controller-manager-6687f8d877-h9cpk\" (UID: \"044cc22a-35c3-49ac-8c70-80478ce3f670\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h9cpk" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.751565 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-w75bt" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.764734 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5c487c8746-9msld" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.777759 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8d9h\" (UniqueName: \"kubernetes.io/projected/66127cf7-84e7-4bb6-9830-936f7e20586d-kube-api-access-t8d9h\") pod \"watcher-operator-controller-manager-564965969-z4fxd\" (UID: \"66127cf7-84e7-4bb6-9830-936f7e20586d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-z4fxd" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.777883 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdnl4\" (UniqueName: \"kubernetes.io/projected/d3967345-0c3d-431b-8408-3f7beaba730d-kube-api-access-hdnl4\") pod \"placement-operator-controller-manager-5b964cf4cd-8dhn6\" (UID: \"d3967345-0c3d-431b-8408-3f7beaba730d\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8dhn6" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.777912 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc9wp\" (UniqueName: \"kubernetes.io/projected/128260c8-c860-43f1-acd0-b5d9ed7d3f01-kube-api-access-fc9wp\") pod \"test-operator-controller-manager-56f8bfcd9f-ld7cp\" (UID: \"128260c8-c860-43f1-acd0-b5d9ed7d3f01\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ld7cp" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.777940 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx5rg\" (UniqueName: \"kubernetes.io/projected/921d8e30-00c8-43e3-b44a-4de9e4450ba2-kube-api-access-xx5rg\") pod \"telemetry-operator-controller-manager-64b5b76f97-ns9pg\" (UID: \"921d8e30-00c8-43e3-b44a-4de9e4450ba2\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ns9pg" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.777993 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn69m\" (UniqueName: \"kubernetes.io/projected/f4df6dfd-91eb-4d61-93fd-b93e111eb127-kube-api-access-zn69m\") pod \"swift-operator-controller-manager-68fc8c869-zxs9g\" (UID: \"f4df6dfd-91eb-4d61-93fd-b93e111eb127\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zxs9g" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.788073 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h9cpk" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.800493 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnw72" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.811720 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdnl4\" (UniqueName: \"kubernetes.io/projected/d3967345-0c3d-431b-8408-3f7beaba730d-kube-api-access-hdnl4\") pod \"placement-operator-controller-manager-5b964cf4cd-8dhn6\" (UID: \"d3967345-0c3d-431b-8408-3f7beaba730d\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8dhn6" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.817985 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn69m\" (UniqueName: \"kubernetes.io/projected/f4df6dfd-91eb-4d61-93fd-b93e111eb127-kube-api-access-zn69m\") pod \"swift-operator-controller-manager-68fc8c869-zxs9g\" (UID: \"f4df6dfd-91eb-4d61-93fd-b93e111eb127\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zxs9g" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.823215 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.824310 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.827740 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.828149 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8dhn6" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.828605 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.830162 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.831941 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-g5nxz" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.850623 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx5rg\" (UniqueName: \"kubernetes.io/projected/921d8e30-00c8-43e3-b44a-4de9e4450ba2-kube-api-access-xx5rg\") pod \"telemetry-operator-controller-manager-64b5b76f97-ns9pg\" (UID: \"921d8e30-00c8-43e3-b44a-4de9e4450ba2\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ns9pg" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.866927 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zxs9g" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.881908 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc9wp\" (UniqueName: \"kubernetes.io/projected/128260c8-c860-43f1-acd0-b5d9ed7d3f01-kube-api-access-fc9wp\") pod \"test-operator-controller-manager-56f8bfcd9f-ld7cp\" (UID: \"128260c8-c860-43f1-acd0-b5d9ed7d3f01\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ld7cp" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.881985 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert\") pod \"infra-operator-controller-manager-79955696d6-frg6k\" (UID: \"9a2f99f7-889a-4847-88f0-3241c2fa3353\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.882030 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8d9h\" (UniqueName: \"kubernetes.io/projected/66127cf7-84e7-4bb6-9830-936f7e20586d-kube-api-access-t8d9h\") pod \"watcher-operator-controller-manager-564965969-z4fxd\" (UID: \"66127cf7-84e7-4bb6-9830-936f7e20586d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-z4fxd" Jan 30 17:10:14 crc kubenswrapper[4875]: E0130 17:10:14.882619 4875 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 17:10:14 crc kubenswrapper[4875]: E0130 17:10:14.882671 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert podName:9a2f99f7-889a-4847-88f0-3241c2fa3353 nodeName:}" failed. No retries permitted until 2026-01-30 17:10:15.882654766 +0000 UTC m=+826.430018149 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert") pod "infra-operator-controller-manager-79955696d6-frg6k" (UID: "9a2f99f7-889a-4847-88f0-3241c2fa3353") : secret "infra-operator-webhook-server-cert" not found Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.906783 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ns9pg" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.910047 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8d9h\" (UniqueName: \"kubernetes.io/projected/66127cf7-84e7-4bb6-9830-936f7e20586d-kube-api-access-t8d9h\") pod \"watcher-operator-controller-manager-564965969-z4fxd\" (UID: \"66127cf7-84e7-4bb6-9830-936f7e20586d\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-z4fxd" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.925764 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj2ss"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.926969 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj2ss" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.931517 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj2ss"] Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.932310 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jww72" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.933205 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc9wp\" (UniqueName: \"kubernetes.io/projected/128260c8-c860-43f1-acd0-b5d9ed7d3f01-kube-api-access-fc9wp\") pod \"test-operator-controller-manager-56f8bfcd9f-ld7cp\" (UID: \"128260c8-c860-43f1-acd0-b5d9ed7d3f01\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ld7cp" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.984356 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6scvw\" (UniqueName: \"kubernetes.io/projected/86be17ce-228e-46ba-84df-5134bdb00c99-kube-api-access-6scvw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nj2ss\" (UID: \"86be17ce-228e-46ba-84df-5134bdb00c99\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj2ss" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.984418 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b7c5\" (UniqueName: \"kubernetes.io/projected/662b188b-86ea-439e-a40b-6284d49e476e-kube-api-access-2b7c5\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.984441 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:14 crc kubenswrapper[4875]: I0130 17:10:14.984462 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.039155 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ld7cp" Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.078743 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-z4fxd" Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.085264 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6scvw\" (UniqueName: \"kubernetes.io/projected/86be17ce-228e-46ba-84df-5134bdb00c99-kube-api-access-6scvw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nj2ss\" (UID: \"86be17ce-228e-46ba-84df-5134bdb00c99\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj2ss" Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.085411 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b7c5\" (UniqueName: \"kubernetes.io/projected/662b188b-86ea-439e-a40b-6284d49e476e-kube-api-access-2b7c5\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.085502 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.085599 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:15 crc kubenswrapper[4875]: E0130 17:10:15.085847 4875 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 17:10:15 crc kubenswrapper[4875]: E0130 17:10:15.086019 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs podName:662b188b-86ea-439e-a40b-6284d49e476e nodeName:}" failed. No retries permitted until 2026-01-30 17:10:15.586003136 +0000 UTC m=+826.133366519 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs") pod "openstack-operator-controller-manager-6f764c8dd-9ntw2" (UID: "662b188b-86ea-439e-a40b-6284d49e476e") : secret "webhook-server-cert" not found Jan 30 17:10:15 crc kubenswrapper[4875]: E0130 17:10:15.086356 4875 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 17:10:15 crc kubenswrapper[4875]: E0130 17:10:15.086442 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs podName:662b188b-86ea-439e-a40b-6284d49e476e nodeName:}" failed. No retries permitted until 2026-01-30 17:10:15.58643306 +0000 UTC m=+826.133796443 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs") pod "openstack-operator-controller-manager-6f764c8dd-9ntw2" (UID: "662b188b-86ea-439e-a40b-6284d49e476e") : secret "metrics-server-cert" not found Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.108070 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b7c5\" (UniqueName: \"kubernetes.io/projected/662b188b-86ea-439e-a40b-6284d49e476e-kube-api-access-2b7c5\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.109198 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6scvw\" (UniqueName: \"kubernetes.io/projected/86be17ce-228e-46ba-84df-5134bdb00c99-kube-api-access-6scvw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nj2ss\" (UID: \"86be17ce-228e-46ba-84df-5134bdb00c99\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj2ss" Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.186751 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2\" (UID: \"59490e66-2646-4a95-9b81-e372fbd2f921\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" Jan 30 17:10:15 crc kubenswrapper[4875]: E0130 17:10:15.187507 4875 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:10:15 crc kubenswrapper[4875]: E0130 17:10:15.187552 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert podName:59490e66-2646-4a95-9b81-e372fbd2f921 nodeName:}" failed. No retries permitted until 2026-01-30 17:10:16.187539365 +0000 UTC m=+826.734902748 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" (UID: "59490e66-2646-4a95-9b81-e372fbd2f921") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.284972 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj2ss" Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.303376 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mjlwh"] Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.317760 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-gbhbx"] Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.597262 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.597307 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:15 crc kubenswrapper[4875]: E0130 17:10:15.597494 4875 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 17:10:15 crc kubenswrapper[4875]: E0130 17:10:15.597551 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs podName:662b188b-86ea-439e-a40b-6284d49e476e nodeName:}" failed. No retries permitted until 2026-01-30 17:10:16.597538036 +0000 UTC m=+827.144901419 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs") pod "openstack-operator-controller-manager-6f764c8dd-9ntw2" (UID: "662b188b-86ea-439e-a40b-6284d49e476e") : secret "webhook-server-cert" not found Jan 30 17:10:15 crc kubenswrapper[4875]: E0130 17:10:15.597768 4875 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 17:10:15 crc kubenswrapper[4875]: E0130 17:10:15.597859 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs podName:662b188b-86ea-439e-a40b-6284d49e476e nodeName:}" failed. No retries permitted until 2026-01-30 17:10:16.597840917 +0000 UTC m=+827.145204300 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs") pod "openstack-operator-controller-manager-6f764c8dd-9ntw2" (UID: "662b188b-86ea-439e-a40b-6284d49e476e") : secret "metrics-server-cert" not found Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.701979 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-znpxc"] Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.714532 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-dm9v4"] Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.758363 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-cpvgb"] Jan 30 17:10:15 crc kubenswrapper[4875]: W0130 17:10:15.762678 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a65b1f7_9d89_4a8b_9af9_811495df5c5f.slice/crio-9d3920acae7fe5c944652cff0f11eb93140c482925d337bab689d0c0da9e998f WatchSource:0}: Error finding container 9d3920acae7fe5c944652cff0f11eb93140c482925d337bab689d0c0da9e998f: Status 404 returned error can't find the container with id 9d3920acae7fe5c944652cff0f11eb93140c482925d337bab689d0c0da9e998f Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.777418 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-nzlnv"] Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.800667 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-d74js"] Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.811561 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-bvnzf"] Jan 30 17:10:15 crc kubenswrapper[4875]: W0130 17:10:15.816355 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod792a5bfa_13bb_4e86_ab45_09dd184fcab3.slice/crio-201202b0ed8ad8a1179f2de2e944e3cbddbcf286cddc2ab92c9b895986266dd5 WatchSource:0}: Error finding container 201202b0ed8ad8a1179f2de2e944e3cbddbcf286cddc2ab92c9b895986266dd5: Status 404 returned error can't find the container with id 201202b0ed8ad8a1179f2de2e944e3cbddbcf286cddc2ab92c9b895986266dd5 Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.829970 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fdmpd"] Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.836223 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-fpcz4"] Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.894230 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-d74js" event={"ID":"a8c14e5e-0827-45c6-8e21-c524ad39fb11","Type":"ContainerStarted","Data":"fda4ff43a74dbc7db1a1e3315f2dc743ba6563367cd8bbb8b90ddfb07ac462c1"} Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.897338 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-dm9v4" 
event={"ID":"4d112d50-a873-440f-b366-332c135cd9cf","Type":"ContainerStarted","Data":"dfb334c3307ff26d6b46a33b5599583cec36dbf03c0f482dfbf96d38e4d17d5a"} Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.898201 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-znpxc" event={"ID":"daa61e94-524b-445a-8086-63a4a3db6764","Type":"ContainerStarted","Data":"a033b35391ef8c30e7cb6c1b5e06f9d6b9f0e4419caf2532bf6d4378a87bfe83"} Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.899191 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bvnzf" event={"ID":"d6508139-1b0b-45c7-b307-901c0903370f","Type":"ContainerStarted","Data":"32bbdf5a5c3d098942c5efad29db7a2848845651594d9dd88b4c81db2d4f844e"} Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.900364 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cpvgb" event={"ID":"1a65b1f7-9d89-4a8b-9af9-811495df5c5f","Type":"ContainerStarted","Data":"9d3920acae7fe5c944652cff0f11eb93140c482925d337bab689d0c0da9e998f"} Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.901724 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gbhbx" event={"ID":"89036e1f-6293-456d-ae24-6a52b2a102d9","Type":"ContainerStarted","Data":"5f370a4adb57002c99f0a408e0b6957de7f5cd7436da0a7e419e0f867d6aaa71"} Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.903701 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mjlwh" event={"ID":"be56ef14-c793-4e0a-82bb-4e29b4182e22","Type":"ContainerStarted","Data":"7d7b6d7def7d6e01b18f43a9c5c23e4efcad6d6c55e6010d539259b5dbe02917"} Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.905393 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fpcz4" event={"ID":"14395019-dadc-4326-8a88-3f8746438a60","Type":"ContainerStarted","Data":"cf6c82e16fec5e8b008f8e60f12b867cf080e3c90553296f08b2d2d2b64b4870"} Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.906558 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fdmpd" event={"ID":"792a5bfa-13bb-4e86-ab45-09dd184fcab3","Type":"ContainerStarted","Data":"201202b0ed8ad8a1179f2de2e944e3cbddbcf286cddc2ab92c9b895986266dd5"} Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.908642 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-nzlnv" event={"ID":"408af5cb-dfce-44ff-9b25-5378f194196f","Type":"ContainerStarted","Data":"6f929c5f544a9b6835a25236b804de82d2dd68fff5f71a95148d3a219068c126"} Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.910891 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert\") pod \"infra-operator-controller-manager-79955696d6-frg6k\" (UID: \"9a2f99f7-889a-4847-88f0-3241c2fa3353\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" Jan 30 17:10:15 crc kubenswrapper[4875]: E0130 17:10:15.911039 4875 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Jan 30 17:10:15 crc kubenswrapper[4875]: E0130 17:10:15.911115 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert podName:9a2f99f7-889a-4847-88f0-3241c2fa3353 nodeName:}" failed. No retries permitted until 2026-01-30 17:10:17.911097581 +0000 UTC m=+828.458460964 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert") pod "infra-operator-controller-manager-79955696d6-frg6k" (UID: "9a2f99f7-889a-4847-88f0-3241c2fa3353") : secret "infra-operator-webhook-server-cert" not found Jan 30 17:10:15 crc kubenswrapper[4875]: I0130 17:10:15.990777 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-h9cpk"] Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.175921 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-zxs9g"] Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.186670 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-w75bt"] Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.193236 4875 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t42jw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-w75bt_openstack-operators(972271b3-306a-4015-be23-c1320e0c296e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.195523 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5c487c8746-9msld"] Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.195799 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-w75bt" podUID="972271b3-306a-4015-be23-c1320e0c296e" Jan 30 17:10:16 crc kubenswrapper[4875]: W0130 17:10:16.202307 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86be17ce_228e_46ba_84df_5134bdb00c99.slice/crio-a5bbe9ccac62997583397ee39d28ab0eaf8bf7f27072429ef7b9a7ccaa7efb2f WatchSource:0}: Error finding container a5bbe9ccac62997583397ee39d28ab0eaf8bf7f27072429ef7b9a7ccaa7efb2f: Status 404 returned error can't find the container with id a5bbe9ccac62997583397ee39d28ab0eaf8bf7f27072429ef7b9a7ccaa7efb2f Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.204747 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ld7cp"] Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.210804 4875 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fc9wp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-ld7cp_openstack-operators(128260c8-c860-43f1-acd0-b5d9ed7d3f01): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.211049 4875 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6scvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-nj2ss_openstack-operators(86be17ce-228e-46ba-84df-5134bdb00c99): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.211996 4875 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ld7cp" podUID="128260c8-c860-43f1-acd0-b5d9ed7d3f01" Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.212053 4875 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xx5rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-ns9pg_openstack-operators(921d8e30-00c8-43e3-b44a-4de9e4450ba2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.212114 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj2ss" podUID="86be17ce-228e-46ba-84df-5134bdb00c99" Jan 30 17:10:16 crc kubenswrapper[4875]: W0130 17:10:16.212340 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3967345_0c3d_431b_8408_3f7beaba730d.slice/crio-398a164b01bda97e74966e1d970689712b02bd6eaa2ed999e188b2355617f1d4 WatchSource:0}: Error finding container 398a164b01bda97e74966e1d970689712b02bd6eaa2ed999e188b2355617f1d4: Status 
404 returned error can't find the container with id 398a164b01bda97e74966e1d970689712b02bd6eaa2ed999e188b2355617f1d4 Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.213454 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ns9pg" podUID="921d8e30-00c8-43e3-b44a-4de9e4450ba2" Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.213883 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj2ss"] Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.214632 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2\" (UID: \"59490e66-2646-4a95-9b81-e372fbd2f921\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.214869 4875 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.214948 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert podName:59490e66-2646-4a95-9b81-e372fbd2f921 nodeName:}" failed. No retries permitted until 2026-01-30 17:10:18.214924574 +0000 UTC m=+828.762287957 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" (UID: "59490e66-2646-4a95-9b81-e372fbd2f921") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.216909 4875 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hdnl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-8dhn6_openstack-operators(d3967345-0c3d-431b-8408-3f7beaba730d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.218124 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8dhn6" podUID="d3967345-0c3d-431b-8408-3f7beaba730d" Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.221848 4875 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-snszj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-xnw72_openstack-operators(cefac6c5-5765-4646-a5c1-9832fb0170d6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.221993 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-8dhn6"] Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.223771 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnw72" podUID="cefac6c5-5765-4646-a5c1-9832fb0170d6" Jan 30 17:10:16 crc kubenswrapper[4875]: W0130 17:10:16.225715 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66127cf7_84e7_4bb6_9830_936f7e20586d.slice/crio-05bb09adbeef6849b95efd6ef7962784257c7e5c7e74a4101725c8cb3cca33b0 WatchSource:0}: Error finding container 05bb09adbeef6849b95efd6ef7962784257c7e5c7e74a4101725c8cb3cca33b0: Status 404 returned error can't find the container with id 05bb09adbeef6849b95efd6ef7962784257c7e5c7e74a4101725c8cb3cca33b0 Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.226752 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-xnw72"] Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.228643 4875 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t8d9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-z4fxd_openstack-operators(66127cf7-84e7-4bb6-9830-936f7e20586d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.229826 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-z4fxd" podUID="66127cf7-84e7-4bb6-9830-936f7e20586d" Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.232333 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ns9pg"] Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.235445 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-z4fxd"] Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.621517 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.621574 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.621902 4875 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.621971 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs podName:662b188b-86ea-439e-a40b-6284d49e476e nodeName:}" failed. No retries permitted until 2026-01-30 17:10:18.621952384 +0000 UTC m=+829.169315767 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs") pod "openstack-operator-controller-manager-6f764c8dd-9ntw2" (UID: "662b188b-86ea-439e-a40b-6284d49e476e") : secret "metrics-server-cert" not found Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.622046 4875 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.622085 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs podName:662b188b-86ea-439e-a40b-6284d49e476e nodeName:}" failed. No retries permitted until 2026-01-30 17:10:18.622075989 +0000 UTC m=+829.169439372 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs") pod "openstack-operator-controller-manager-6f764c8dd-9ntw2" (UID: "662b188b-86ea-439e-a40b-6284d49e476e") : secret "webhook-server-cert" not found Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.929140 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5c487c8746-9msld" event={"ID":"bbef4553-54c5-4fcb-9868-49c67b9420b5","Type":"ContainerStarted","Data":"0fed439d60a3f680db875f1935c7769140ea294ddfd35ee7ea0fe26fe672ed46"} Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.931525 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ns9pg" event={"ID":"921d8e30-00c8-43e3-b44a-4de9e4450ba2","Type":"ContainerStarted","Data":"8f6de4d077e21c8cba3422c23b754cb8e866c512e46b228ce8983a1a05a2e461"} Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.933654 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zxs9g" event={"ID":"f4df6dfd-91eb-4d61-93fd-b93e111eb127","Type":"ContainerStarted","Data":"6d18e8dcfbc025cddaa49c028148a275e57f6cdfe3e2b3d0e2239e2f24565b3f"} Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.935159 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-w75bt" event={"ID":"972271b3-306a-4015-be23-c1320e0c296e","Type":"ContainerStarted","Data":"29085ef06fa4befaa72459ab6b142d6509bc51d592396bee083e4c80032e044b"} Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.935758 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ns9pg" podUID="921d8e30-00c8-43e3-b44a-4de9e4450ba2" Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.936058 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-w75bt" podUID="972271b3-306a-4015-be23-c1320e0c296e" Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.938479 4875 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8dhn6" event={"ID":"d3967345-0c3d-431b-8408-3f7beaba730d","Type":"ContainerStarted","Data":"398a164b01bda97e74966e1d970689712b02bd6eaa2ed999e188b2355617f1d4"} Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.939441 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8dhn6" podUID="d3967345-0c3d-431b-8408-3f7beaba730d" Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.941037 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h9cpk" event={"ID":"044cc22a-35c3-49ac-8c70-80478ce3f670","Type":"ContainerStarted","Data":"c9382eaa63afaaa1e51ac8643b550b796017f142d81fa0edaa1112a40ecedf0d"} Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.950703 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-z4fxd" event={"ID":"66127cf7-84e7-4bb6-9830-936f7e20586d","Type":"ContainerStarted","Data":"05bb09adbeef6849b95efd6ef7962784257c7e5c7e74a4101725c8cb3cca33b0"} Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.953758 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-z4fxd" podUID="66127cf7-84e7-4bb6-9830-936f7e20586d" Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.962444 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnw72" event={"ID":"cefac6c5-5765-4646-a5c1-9832fb0170d6","Type":"ContainerStarted","Data":"cf6833d68fcaf10c4c075dc5b57dc52ffae830f5ba77c6c8d0a7e420a1879f22"} Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.964816 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ld7cp" event={"ID":"128260c8-c860-43f1-acd0-b5d9ed7d3f01","Type":"ContainerStarted","Data":"74f70ef69a9da97110be4fa8e60c7abcb779e2251de3908c063b6d94e8b19f05"} Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.965219 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnw72" podUID="cefac6c5-5765-4646-a5c1-9832fb0170d6" Jan 30 17:10:16 crc kubenswrapper[4875]: I0130 17:10:16.967066 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj2ss" event={"ID":"86be17ce-228e-46ba-84df-5134bdb00c99","Type":"ContainerStarted","Data":"a5bbe9ccac62997583397ee39d28ab0eaf8bf7f27072429ef7b9a7ccaa7efb2f"} Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.969813 4875 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ld7cp" podUID="128260c8-c860-43f1-acd0-b5d9ed7d3f01" Jan 30 17:10:16 crc kubenswrapper[4875]: E0130 17:10:16.970457 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj2ss" podUID="86be17ce-228e-46ba-84df-5134bdb00c99" Jan 30 17:10:17 crc kubenswrapper[4875]: I0130 17:10:17.941970 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert\") pod \"infra-operator-controller-manager-79955696d6-frg6k\" (UID: \"9a2f99f7-889a-4847-88f0-3241c2fa3353\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" Jan 30 17:10:17 crc kubenswrapper[4875]: E0130 17:10:17.942241 4875 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 17:10:17 crc kubenswrapper[4875]: E0130 17:10:17.943614 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert podName:9a2f99f7-889a-4847-88f0-3241c2fa3353 nodeName:}" failed. No retries permitted until 2026-01-30 17:10:21.943595952 +0000 UTC m=+832.490959335 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert") pod "infra-operator-controller-manager-79955696d6-frg6k" (UID: "9a2f99f7-889a-4847-88f0-3241c2fa3353") : secret "infra-operator-webhook-server-cert" not found Jan 30 17:10:17 crc kubenswrapper[4875]: E0130 17:10:17.987042 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ns9pg" podUID="921d8e30-00c8-43e3-b44a-4de9e4450ba2" Jan 30 17:10:17 crc kubenswrapper[4875]: E0130 17:10:17.987108 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnw72" podUID="cefac6c5-5765-4646-a5c1-9832fb0170d6" Jan 30 17:10:17 crc kubenswrapper[4875]: E0130 17:10:17.987207 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj2ss" podUID="86be17ce-228e-46ba-84df-5134bdb00c99" Jan 30 17:10:17 crc kubenswrapper[4875]: E0130 17:10:17.987282 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8dhn6" podUID="d3967345-0c3d-431b-8408-3f7beaba730d" Jan 30 17:10:17 crc kubenswrapper[4875]: E0130 17:10:17.987418 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-z4fxd" podUID="66127cf7-84e7-4bb6-9830-936f7e20586d" Jan 30 17:10:17 crc kubenswrapper[4875]: E0130 17:10:17.988709 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-w75bt" podUID="972271b3-306a-4015-be23-c1320e0c296e" Jan 30 17:10:17 crc kubenswrapper[4875]: E0130 17:10:17.988866 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" 
pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ld7cp" podUID="128260c8-c860-43f1-acd0-b5d9ed7d3f01" Jan 30 17:10:18 crc kubenswrapper[4875]: I0130 17:10:18.248009 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2\" (UID: \"59490e66-2646-4a95-9b81-e372fbd2f921\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" Jan 30 17:10:18 crc kubenswrapper[4875]: E0130 17:10:18.248193 4875 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:10:18 crc kubenswrapper[4875]: E0130 17:10:18.248539 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert podName:59490e66-2646-4a95-9b81-e372fbd2f921 nodeName:}" failed. No retries permitted until 2026-01-30 17:10:22.248517273 +0000 UTC m=+832.795880666 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" (UID: "59490e66-2646-4a95-9b81-e372fbd2f921") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:10:18 crc kubenswrapper[4875]: I0130 17:10:18.658651 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:18 crc kubenswrapper[4875]: I0130 17:10:18.658699 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:18 crc kubenswrapper[4875]: E0130 17:10:18.658807 4875 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 17:10:18 crc kubenswrapper[4875]: E0130 17:10:18.658854 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs podName:662b188b-86ea-439e-a40b-6284d49e476e nodeName:}" failed. No retries permitted until 2026-01-30 17:10:22.658839455 +0000 UTC m=+833.206202838 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs") pod "openstack-operator-controller-manager-6f764c8dd-9ntw2" (UID: "662b188b-86ea-439e-a40b-6284d49e476e") : secret "webhook-server-cert" not found Jan 30 17:10:18 crc kubenswrapper[4875]: E0130 17:10:18.659711 4875 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 17:10:18 crc kubenswrapper[4875]: E0130 17:10:18.659967 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs podName:662b188b-86ea-439e-a40b-6284d49e476e nodeName:}" failed. No retries permitted until 2026-01-30 17:10:22.659734195 +0000 UTC m=+833.207097578 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs") pod "openstack-operator-controller-manager-6f764c8dd-9ntw2" (UID: "662b188b-86ea-439e-a40b-6284d49e476e") : secret "metrics-server-cert" not found Jan 30 17:10:22 crc kubenswrapper[4875]: I0130 17:10:22.017294 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert\") pod \"infra-operator-controller-manager-79955696d6-frg6k\" (UID: \"9a2f99f7-889a-4847-88f0-3241c2fa3353\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" Jan 30 17:10:22 crc kubenswrapper[4875]: E0130 17:10:22.017924 4875 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 17:10:22 crc kubenswrapper[4875]: E0130 17:10:22.017982 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert podName:9a2f99f7-889a-4847-88f0-3241c2fa3353 nodeName:}" failed. No retries permitted until 2026-01-30 17:10:30.017964273 +0000 UTC m=+840.565327656 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert") pod "infra-operator-controller-manager-79955696d6-frg6k" (UID: "9a2f99f7-889a-4847-88f0-3241c2fa3353") : secret "infra-operator-webhook-server-cert" not found Jan 30 17:10:22 crc kubenswrapper[4875]: I0130 17:10:22.321676 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2\" (UID: \"59490e66-2646-4a95-9b81-e372fbd2f921\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" Jan 30 17:10:22 crc kubenswrapper[4875]: E0130 17:10:22.321876 4875 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:10:22 crc kubenswrapper[4875]: E0130 17:10:22.321964 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert podName:59490e66-2646-4a95-9b81-e372fbd2f921 nodeName:}" failed. No retries permitted until 2026-01-30 17:10:30.321943191 +0000 UTC m=+840.869306564 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" (UID: "59490e66-2646-4a95-9b81-e372fbd2f921") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:10:22 crc kubenswrapper[4875]: I0130 17:10:22.727253 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:22 crc kubenswrapper[4875]: I0130 17:10:22.727306 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:22 crc kubenswrapper[4875]: E0130 17:10:22.727450 4875 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 17:10:22 crc kubenswrapper[4875]: E0130 17:10:22.727484 4875 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 17:10:22 crc kubenswrapper[4875]: E0130 17:10:22.727525 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs podName:662b188b-86ea-439e-a40b-6284d49e476e nodeName:}" failed. No retries permitted until 2026-01-30 17:10:30.727508251 +0000 UTC m=+841.274871634 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs") pod "openstack-operator-controller-manager-6f764c8dd-9ntw2" (UID: "662b188b-86ea-439e-a40b-6284d49e476e") : secret "metrics-server-cert" not found Jan 30 17:10:22 crc kubenswrapper[4875]: E0130 17:10:22.727562 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs podName:662b188b-86ea-439e-a40b-6284d49e476e nodeName:}" failed. No retries permitted until 2026-01-30 17:10:30.727545542 +0000 UTC m=+841.274908925 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs") pod "openstack-operator-controller-manager-6f764c8dd-9ntw2" (UID: "662b188b-86ea-439e-a40b-6284d49e476e") : secret "webhook-server-cert" not found Jan 30 17:10:28 crc kubenswrapper[4875]: I0130 17:10:28.053414 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fpcz4" event={"ID":"14395019-dadc-4326-8a88-3f8746438a60","Type":"ContainerStarted","Data":"46e6535ecd79b07766de71dc27304cf4e0ca1c14a15ae7bc696cce5cd93f4daf"} Jan 30 17:10:28 crc kubenswrapper[4875]: I0130 17:10:28.054723 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fpcz4" Jan 30 17:10:28 crc kubenswrapper[4875]: I0130 17:10:28.085982 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fpcz4" podStartSLOduration=2.325799021 podStartE2EDuration="14.085965123s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:15.837559532 +0000 UTC m=+826.384922915" lastFinishedPulling="2026-01-30 17:10:27.597725634 +0000 UTC m=+838.145089017" observedRunningTime="2026-01-30 17:10:28.082920139 +0000 UTC m=+838.630283522" watchObservedRunningTime="2026-01-30 17:10:28.085965123 +0000 UTC m=+838.633328506" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.060268 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bvnzf" event={"ID":"d6508139-1b0b-45c7-b307-901c0903370f","Type":"ContainerStarted","Data":"97c9831ce9bffba7f2ae442ead67c1cdf2aeb9c290f40c35045232df7113da6b"} Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.061418 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bvnzf" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.062377 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cpvgb" event={"ID":"1a65b1f7-9d89-4a8b-9af9-811495df5c5f","Type":"ContainerStarted","Data":"d24c98b6a2e0ec193534c134b4c723c990dd5d8477139d6c77b7fc72526b666b"} Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.062866 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cpvgb" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.064456 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-dm9v4" event={"ID":"4d112d50-a873-440f-b366-332c135cd9cf","Type":"ContainerStarted","Data":"5457f765c43c9c291803d41ef2331ea5f47d4097402254e1ddc6ff0395d59ff6"} Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.065998 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-nzlnv" event={"ID":"408af5cb-dfce-44ff-9b25-5378f194196f","Type":"ContainerStarted","Data":"98340a7e8049e849d5c58634aa9b0d09ce823c7a7398699d0d1bb0584499bd68"} Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.066612 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-nzlnv" Jan 30 17:10:29 crc 
kubenswrapper[4875]: I0130 17:10:29.068023 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gbhbx" event={"ID":"89036e1f-6293-456d-ae24-6a52b2a102d9","Type":"ContainerStarted","Data":"797251d7d9458130eee1bc546edcf86d23dc3eaa8d6253bf948aed149eb7742e"} Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.068504 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gbhbx" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.069800 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fdmpd" event={"ID":"792a5bfa-13bb-4e86-ab45-09dd184fcab3","Type":"ContainerStarted","Data":"982cd9b56de1bb733acd1c999982be27f87f220f3629accc691add4575b1168c"} Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.070317 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fdmpd" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.071711 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-d74js" event={"ID":"a8c14e5e-0827-45c6-8e21-c524ad39fb11","Type":"ContainerStarted","Data":"6c62a9164287d8ef98d0e040dd63a81eea36ed175eb8040e063f408eab298db2"} Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.072188 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-d74js" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.073639 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5c487c8746-9msld" event={"ID":"bbef4553-54c5-4fcb-9868-49c67b9420b5","Type":"ContainerStarted","Data":"4235275f9be82d0a6f0f012a96bcf0afe01ab85652f997b514716e94b502ade4"} Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.074042 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5c487c8746-9msld" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.075502 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zxs9g" event={"ID":"f4df6dfd-91eb-4d61-93fd-b93e111eb127","Type":"ContainerStarted","Data":"7ce65a8ad445db06a7fc764ddd60460344eebe3fc4f868e92a71d9c51d7ab40d"} Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.075939 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zxs9g" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.077237 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-znpxc" event={"ID":"daa61e94-524b-445a-8086-63a4a3db6764","Type":"ContainerStarted","Data":"d7f4c92756ea7ccd0d03ddb0049dec0129fb2969d3eae3e4df32dc8a451fd995"} Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.077941 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-znpxc" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.079757 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mjlwh" 
event={"ID":"be56ef14-c793-4e0a-82bb-4e29b4182e22","Type":"ContainerStarted","Data":"f8e2170b3b82fefbf8b620b6cbc3ff96ece02d867f71d0458796fb734b3ee049"} Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.080222 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mjlwh" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.082102 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h9cpk" event={"ID":"044cc22a-35c3-49ac-8c70-80478ce3f670","Type":"ContainerStarted","Data":"0e04268a108b1c304fa807a6016a6cc83acae34eb8cf9d33fb8e7ce6fdc8082e"} Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.082276 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h9cpk" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.118744 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bvnzf" podStartSLOduration=3.342714295 podStartE2EDuration="15.118723344s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:15.828113621 +0000 UTC m=+826.375477004" lastFinishedPulling="2026-01-30 17:10:27.60412267 +0000 UTC m=+838.151486053" observedRunningTime="2026-01-30 17:10:29.090841767 +0000 UTC m=+839.638205170" watchObservedRunningTime="2026-01-30 17:10:29.118723344 +0000 UTC m=+839.666086727" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.121038 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-d74js" podStartSLOduration=3.204635822 podStartE2EDuration="15.121029402s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:15.806180815 +0000 UTC m=+826.353544198" lastFinishedPulling="2026-01-30 17:10:27.722574395 +0000 UTC m=+838.269937778" observedRunningTime="2026-01-30 17:10:29.119973627 +0000 UTC m=+839.667337010" watchObservedRunningTime="2026-01-30 17:10:29.121029402 +0000 UTC m=+839.668392785" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.154326 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h9cpk" podStartSLOduration=3.5376875009999997 podStartE2EDuration="15.154304504s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:15.995747837 +0000 UTC m=+826.543111220" lastFinishedPulling="2026-01-30 17:10:27.61236484 +0000 UTC m=+838.159728223" observedRunningTime="2026-01-30 17:10:29.147182281 +0000 UTC m=+839.694545674" watchObservedRunningTime="2026-01-30 17:10:29.154304504 +0000 UTC m=+839.701667897" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.194176 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-nzlnv" podStartSLOduration=3.417039279 podStartE2EDuration="15.194160907s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:15.815073808 +0000 UTC m=+826.362437191" lastFinishedPulling="2026-01-30 17:10:27.592195436 +0000 UTC m=+838.139558819" observedRunningTime="2026-01-30 17:10:29.191383603 +0000 UTC m=+839.738746986" watchObservedRunningTime="2026-01-30 17:10:29.194160907 +0000 UTC m=+839.741524290" 
Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.194689 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zxs9g" podStartSLOduration=3.768834983 podStartE2EDuration="15.194683855s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:16.188521727 +0000 UTC m=+826.735885110" lastFinishedPulling="2026-01-30 17:10:27.614370599 +0000 UTC m=+838.161733982" observedRunningTime="2026-01-30 17:10:29.170720111 +0000 UTC m=+839.718083494" watchObservedRunningTime="2026-01-30 17:10:29.194683855 +0000 UTC m=+839.742047248" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.228135 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fdmpd" podStartSLOduration=3.436564754 podStartE2EDuration="15.228109481s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:15.821810327 +0000 UTC m=+826.369173710" lastFinishedPulling="2026-01-30 17:10:27.613355054 +0000 UTC m=+838.160718437" observedRunningTime="2026-01-30 17:10:29.220154401 +0000 UTC m=+839.767517784" watchObservedRunningTime="2026-01-30 17:10:29.228109481 +0000 UTC m=+839.775472864" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.249248 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-dm9v4" podStartSLOduration=3.248428761 podStartE2EDuration="15.249206928s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:15.722658498 +0000 UTC m=+826.270021871" lastFinishedPulling="2026-01-30 17:10:27.723436655 +0000 UTC m=+838.270800038" observedRunningTime="2026-01-30 17:10:29.24604012 +0000 UTC m=+839.793403503" watchObservedRunningTime="2026-01-30 17:10:29.249206928 +0000 UTC m=+839.796570311" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.265885 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gbhbx" podStartSLOduration=3.095993682 podStartE2EDuration="15.265865164s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:15.444073502 +0000 UTC m=+825.991436895" lastFinishedPulling="2026-01-30 17:10:27.613944994 +0000 UTC m=+838.161308377" observedRunningTime="2026-01-30 17:10:29.259446616 +0000 UTC m=+839.806809999" watchObservedRunningTime="2026-01-30 17:10:29.265865164 +0000 UTC m=+839.813228547" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.296171 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-znpxc" podStartSLOduration=3.22512743 podStartE2EDuration="15.296153773s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:15.705168204 +0000 UTC m=+826.252531587" lastFinishedPulling="2026-01-30 17:10:27.776194547 +0000 UTC m=+838.323557930" observedRunningTime="2026-01-30 17:10:29.290326795 +0000 UTC m=+839.837690178" watchObservedRunningTime="2026-01-30 17:10:29.296153773 +0000 UTC m=+839.843517156" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.314390 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5c487c8746-9msld" podStartSLOduration=3.9140850780000003 
podStartE2EDuration="15.314367491s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:16.192465701 +0000 UTC m=+826.739829084" lastFinishedPulling="2026-01-30 17:10:27.592748114 +0000 UTC m=+838.140111497" observedRunningTime="2026-01-30 17:10:29.30873163 +0000 UTC m=+839.856095013" watchObservedRunningTime="2026-01-30 17:10:29.314367491 +0000 UTC m=+839.861730884" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.332735 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cpvgb" podStartSLOduration=3.387092483 podStartE2EDuration="15.332716846s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:15.776913001 +0000 UTC m=+826.324276384" lastFinishedPulling="2026-01-30 17:10:27.722537344 +0000 UTC m=+838.269900747" observedRunningTime="2026-01-30 17:10:29.331071209 +0000 UTC m=+839.878434592" watchObservedRunningTime="2026-01-30 17:10:29.332716846 +0000 UTC m=+839.880080229" Jan 30 17:10:29 crc kubenswrapper[4875]: I0130 17:10:29.354176 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mjlwh" podStartSLOduration=3.066484279 podStartE2EDuration="15.354160334s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:15.435533022 +0000 UTC m=+825.982896405" lastFinishedPulling="2026-01-30 17:10:27.723209077 +0000 UTC m=+838.270572460" observedRunningTime="2026-01-30 17:10:29.348299944 +0000 UTC m=+839.895663327" watchObservedRunningTime="2026-01-30 17:10:29.354160334 +0000 UTC m=+839.901523717" Jan 30 17:10:30 crc kubenswrapper[4875]: I0130 17:10:30.031523 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert\") pod \"infra-operator-controller-manager-79955696d6-frg6k\" (UID: \"9a2f99f7-889a-4847-88f0-3241c2fa3353\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" Jan 30 17:10:30 crc kubenswrapper[4875]: E0130 17:10:30.031816 4875 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 17:10:30 crc kubenswrapper[4875]: E0130 17:10:30.031877 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert podName:9a2f99f7-889a-4847-88f0-3241c2fa3353 nodeName:}" failed. No retries permitted until 2026-01-30 17:10:46.031858361 +0000 UTC m=+856.579221754 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert") pod "infra-operator-controller-manager-79955696d6-frg6k" (UID: "9a2f99f7-889a-4847-88f0-3241c2fa3353") : secret "infra-operator-webhook-server-cert" not found Jan 30 17:10:30 crc kubenswrapper[4875]: I0130 17:10:30.116136 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-dm9v4" Jan 30 17:10:30 crc kubenswrapper[4875]: I0130 17:10:30.337700 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2\" (UID: \"59490e66-2646-4a95-9b81-e372fbd2f921\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" Jan 30 17:10:30 crc kubenswrapper[4875]: E0130 17:10:30.337791 4875 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:10:30 crc kubenswrapper[4875]: E0130 17:10:30.337882 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert podName:59490e66-2646-4a95-9b81-e372fbd2f921 nodeName:}" failed. No retries permitted until 2026-01-30 17:10:46.337855038 +0000 UTC m=+856.885218411 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" (UID: "59490e66-2646-4a95-9b81-e372fbd2f921") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:10:30 crc kubenswrapper[4875]: I0130 17:10:30.743370 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:30 crc kubenswrapper[4875]: I0130 17:10:30.743797 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:30 crc kubenswrapper[4875]: E0130 17:10:30.743562 4875 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 17:10:30 crc kubenswrapper[4875]: E0130 17:10:30.744025 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs podName:662b188b-86ea-439e-a40b-6284d49e476e nodeName:}" failed. No retries permitted until 2026-01-30 17:10:46.744008249 +0000 UTC m=+857.291371632 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs") pod "openstack-operator-controller-manager-6f764c8dd-9ntw2" (UID: "662b188b-86ea-439e-a40b-6284d49e476e") : secret "metrics-server-cert" not found Jan 30 17:10:30 crc kubenswrapper[4875]: E0130 17:10:30.743970 4875 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 17:10:30 crc kubenswrapper[4875]: E0130 17:10:30.744065 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs podName:662b188b-86ea-439e-a40b-6284d49e476e nodeName:}" failed. No retries permitted until 2026-01-30 17:10:46.74405443 +0000 UTC m=+857.291417813 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs") pod "openstack-operator-controller-manager-6f764c8dd-9ntw2" (UID: "662b188b-86ea-439e-a40b-6284d49e476e") : secret "webhook-server-cert" not found Jan 30 17:10:34 crc kubenswrapper[4875]: I0130 17:10:34.479960 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-mjlwh" Jan 30 17:10:34 crc kubenswrapper[4875]: I0130 17:10:34.490946 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-dm9v4" Jan 30 17:10:34 crc kubenswrapper[4875]: I0130 17:10:34.503629 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-gbhbx" Jan 30 17:10:34 crc kubenswrapper[4875]: I0130 17:10:34.529561 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-znpxc" Jan 30 17:10:34 crc kubenswrapper[4875]: I0130 17:10:34.543171 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-bvnzf" Jan 30 17:10:34 crc kubenswrapper[4875]: I0130 17:10:34.565396 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fpcz4" Jan 30 17:10:34 crc kubenswrapper[4875]: I0130 17:10:34.635231 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-cpvgb" Jan 30 17:10:34 crc kubenswrapper[4875]: I0130 17:10:34.659427 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fdmpd" Jan 30 17:10:34 crc kubenswrapper[4875]: I0130 17:10:34.671206 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-nzlnv" Jan 30 17:10:34 crc kubenswrapper[4875]: I0130 17:10:34.682137 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-d74js" Jan 30 17:10:34 crc kubenswrapper[4875]: I0130 17:10:34.769896 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5c487c8746-9msld" Jan 30 17:10:34 crc kubenswrapper[4875]: 
I0130 17:10:34.791416 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-h9cpk" Jan 30 17:10:34 crc kubenswrapper[4875]: I0130 17:10:34.872084 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zxs9g" Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.172851 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ns9pg" event={"ID":"921d8e30-00c8-43e3-b44a-4de9e4450ba2","Type":"ContainerStarted","Data":"845191d888e77b0193f7c16bc10439d5ae62c3c3c0a3124ea435098d9f0af66b"} Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.173562 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ns9pg" Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.187599 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-w75bt" event={"ID":"972271b3-306a-4015-be23-c1320e0c296e","Type":"ContainerStarted","Data":"bfd151d87bdf763e85cadf498ad88aec8157ea35c9c9f43cff6c0a05a15b9802"} Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.187855 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-w75bt" Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.189285 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-z4fxd" event={"ID":"66127cf7-84e7-4bb6-9830-936f7e20586d","Type":"ContainerStarted","Data":"4a28eaea06ef884642b1fddd4406be27e5e6ec76e45718625a910f67faef27f3"} Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.189768 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-z4fxd" Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.191854 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnw72" event={"ID":"cefac6c5-5765-4646-a5c1-9832fb0170d6","Type":"ContainerStarted","Data":"49d867b792248ee48df28041a5bd7c563938250b952344ea97f2f1e61854f370"} Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.192517 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnw72" Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.202463 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ld7cp" event={"ID":"128260c8-c860-43f1-acd0-b5d9ed7d3f01","Type":"ContainerStarted","Data":"217cee95a8f2c4a36b9b10c1d8fcd5ed11093cb8f015fb7c0118a5d1013040c3"} Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.203654 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ld7cp" Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.209519 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8dhn6" event={"ID":"d3967345-0c3d-431b-8408-3f7beaba730d","Type":"ContainerStarted","Data":"c730c8320374d92217b3996ea86c0c25ec281887fe1bcae7d4112e9c00b03496"} Jan 30 17:10:38 crc 
kubenswrapper[4875]: I0130 17:10:38.209858 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8dhn6" Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.210853 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj2ss" event={"ID":"86be17ce-228e-46ba-84df-5134bdb00c99","Type":"ContainerStarted","Data":"3864cf008412ae28e4c5e12212c557616619325b4551b7b9e9739a944ffa4e97"} Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.216391 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ns9pg" podStartSLOduration=5.082114687 podStartE2EDuration="24.216374742s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:16.211946334 +0000 UTC m=+826.759309717" lastFinishedPulling="2026-01-30 17:10:35.346206389 +0000 UTC m=+845.893569772" observedRunningTime="2026-01-30 17:10:38.215261894 +0000 UTC m=+848.762625297" watchObservedRunningTime="2026-01-30 17:10:38.216374742 +0000 UTC m=+848.763738125" Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.232298 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ld7cp" podStartSLOduration=4.668171139 podStartE2EDuration="24.232280962s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:16.210642219 +0000 UTC m=+826.758005602" lastFinishedPulling="2026-01-30 17:10:35.774752042 +0000 UTC m=+846.322115425" observedRunningTime="2026-01-30 17:10:38.230049695 +0000 UTC m=+848.777413078" watchObservedRunningTime="2026-01-30 17:10:38.232280962 +0000 UTC m=+848.779644345" Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.247269 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-w75bt" podStartSLOduration=4.659032693 podStartE2EDuration="24.247249229s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:16.193130253 +0000 UTC m=+826.740493636" lastFinishedPulling="2026-01-30 17:10:35.781346789 +0000 UTC m=+846.328710172" observedRunningTime="2026-01-30 17:10:38.245809729 +0000 UTC m=+848.793173132" watchObservedRunningTime="2026-01-30 17:10:38.247249229 +0000 UTC m=+848.794612612" Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.259881 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8dhn6" podStartSLOduration=4.652371709 podStartE2EDuration="24.259863674s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:16.216807218 +0000 UTC m=+826.764170591" lastFinishedPulling="2026-01-30 17:10:35.824299173 +0000 UTC m=+846.371662556" observedRunningTime="2026-01-30 17:10:38.259644827 +0000 UTC m=+848.807008210" watchObservedRunningTime="2026-01-30 17:10:38.259863674 +0000 UTC m=+848.807227057" Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.278979 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnw72" podStartSLOduration=5.205850036 podStartE2EDuration="24.278956014s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:16.221720345 
+0000 UTC m=+826.769083728" lastFinishedPulling="2026-01-30 17:10:35.294826323 +0000 UTC m=+845.842189706" observedRunningTime="2026-01-30 17:10:38.275636249 +0000 UTC m=+848.822999632" watchObservedRunningTime="2026-01-30 17:10:38.278956014 +0000 UTC m=+848.826319397" Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.304288 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-z4fxd" podStartSLOduration=4.681878791 podStartE2EDuration="24.304236967s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:16.228459674 +0000 UTC m=+826.775823057" lastFinishedPulling="2026-01-30 17:10:35.85081785 +0000 UTC m=+846.398181233" observedRunningTime="2026-01-30 17:10:38.299622128 +0000 UTC m=+848.846985511" watchObservedRunningTime="2026-01-30 17:10:38.304236967 +0000 UTC m=+848.851600350" Jan 30 17:10:38 crc kubenswrapper[4875]: I0130 17:10:38.314491 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nj2ss" podStartSLOduration=5.230541384 podStartE2EDuration="24.314473691s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:16.210926318 +0000 UTC m=+826.758289701" lastFinishedPulling="2026-01-30 17:10:35.294858615 +0000 UTC m=+845.842222008" observedRunningTime="2026-01-30 17:10:38.31097439 +0000 UTC m=+848.858337773" watchObservedRunningTime="2026-01-30 17:10:38.314473691 +0000 UTC m=+848.861837074" Jan 30 17:10:44 crc kubenswrapper[4875]: I0130 17:10:44.754262 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-w75bt" Jan 30 17:10:44 crc kubenswrapper[4875]: I0130 17:10:44.822724 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xnw72" Jan 30 17:10:44 crc kubenswrapper[4875]: I0130 17:10:44.831930 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-8dhn6" Jan 30 17:10:44 crc kubenswrapper[4875]: I0130 17:10:44.909474 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-ns9pg" Jan 30 17:10:45 crc kubenswrapper[4875]: I0130 17:10:45.042283 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ld7cp" Jan 30 17:10:45 crc kubenswrapper[4875]: I0130 17:10:45.081149 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-z4fxd" Jan 30 17:10:46 crc kubenswrapper[4875]: I0130 17:10:46.077051 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert\") pod \"infra-operator-controller-manager-79955696d6-frg6k\" (UID: \"9a2f99f7-889a-4847-88f0-3241c2fa3353\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" Jan 30 17:10:46 crc kubenswrapper[4875]: I0130 17:10:46.083628 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a2f99f7-889a-4847-88f0-3241c2fa3353-cert\") pod 
\"infra-operator-controller-manager-79955696d6-frg6k\" (UID: \"9a2f99f7-889a-4847-88f0-3241c2fa3353\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" Jan 30 17:10:46 crc kubenswrapper[4875]: I0130 17:10:46.380294 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2\" (UID: \"59490e66-2646-4a95-9b81-e372fbd2f921\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" Jan 30 17:10:46 crc kubenswrapper[4875]: I0130 17:10:46.380616 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-l9rmf" Jan 30 17:10:46 crc kubenswrapper[4875]: I0130 17:10:46.384864 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/59490e66-2646-4a95-9b81-e372fbd2f921-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2\" (UID: \"59490e66-2646-4a95-9b81-e372fbd2f921\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" Jan 30 17:10:46 crc kubenswrapper[4875]: I0130 17:10:46.388438 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" Jan 30 17:10:46 crc kubenswrapper[4875]: I0130 17:10:46.616626 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cdz4h" Jan 30 17:10:46 crc kubenswrapper[4875]: I0130 17:10:46.624885 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" Jan 30 17:10:46 crc kubenswrapper[4875]: I0130 17:10:46.777821 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-frg6k"] Jan 30 17:10:46 crc kubenswrapper[4875]: W0130 17:10:46.781560 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a2f99f7_889a_4847_88f0_3241c2fa3353.slice/crio-808a49f67be868b3fe0c0514e4ee278736a957262708ed1782096d58a071f51d WatchSource:0}: Error finding container 808a49f67be868b3fe0c0514e4ee278736a957262708ed1782096d58a071f51d: Status 404 returned error can't find the container with id 808a49f67be868b3fe0c0514e4ee278736a957262708ed1782096d58a071f51d Jan 30 17:10:46 crc kubenswrapper[4875]: I0130 17:10:46.787720 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:46 crc kubenswrapper[4875]: I0130 17:10:46.787768 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:46 crc kubenswrapper[4875]: I0130 17:10:46.791952 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-webhook-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:46 crc kubenswrapper[4875]: I0130 17:10:46.792646 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/662b188b-86ea-439e-a40b-6284d49e476e-metrics-certs\") pod \"openstack-operator-controller-manager-6f764c8dd-9ntw2\" (UID: \"662b188b-86ea-439e-a40b-6284d49e476e\") " pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:46 crc kubenswrapper[4875]: I0130 17:10:46.970810 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-g5nxz" Jan 30 17:10:46 crc kubenswrapper[4875]: I0130 17:10:46.979319 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:47 crc kubenswrapper[4875]: I0130 17:10:47.060601 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2"] Jan 30 17:10:47 crc kubenswrapper[4875]: I0130 17:10:47.262449 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" event={"ID":"9a2f99f7-889a-4847-88f0-3241c2fa3353","Type":"ContainerStarted","Data":"808a49f67be868b3fe0c0514e4ee278736a957262708ed1782096d58a071f51d"} Jan 30 17:10:47 crc kubenswrapper[4875]: I0130 17:10:47.263398 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" event={"ID":"59490e66-2646-4a95-9b81-e372fbd2f921","Type":"ContainerStarted","Data":"88dee8f6898210e0e32f5bdb6388a9e1ae3d7215b2f84215df4b71183b3d888a"} Jan 30 17:10:47 crc kubenswrapper[4875]: I0130 17:10:47.513652 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2"] Jan 30 17:10:48 crc kubenswrapper[4875]: I0130 17:10:48.270357 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" event={"ID":"662b188b-86ea-439e-a40b-6284d49e476e","Type":"ContainerStarted","Data":"ab3d05272985719344067f4b72a161097b7349ad8ca4f3c2c5481efd6b5556aa"} Jan 30 17:10:48 crc kubenswrapper[4875]: I0130 17:10:48.443749 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-scp96"] Jan 30 17:10:48 crc kubenswrapper[4875]: I0130 17:10:48.445388 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-scp96" Jan 30 17:10:48 crc kubenswrapper[4875]: I0130 17:10:48.461940 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-scp96"] Jan 30 17:10:48 crc kubenswrapper[4875]: I0130 17:10:48.611906 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s6ck\" (UniqueName: \"kubernetes.io/projected/d416aef0-20fc-4419-9662-6e7933e12684-kube-api-access-9s6ck\") pod \"certified-operators-scp96\" (UID: \"d416aef0-20fc-4419-9662-6e7933e12684\") " pod="openshift-marketplace/certified-operators-scp96" Jan 30 17:10:48 crc kubenswrapper[4875]: I0130 17:10:48.612240 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d416aef0-20fc-4419-9662-6e7933e12684-utilities\") pod \"certified-operators-scp96\" (UID: \"d416aef0-20fc-4419-9662-6e7933e12684\") " pod="openshift-marketplace/certified-operators-scp96" Jan 30 17:10:48 crc kubenswrapper[4875]: I0130 17:10:48.612307 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d416aef0-20fc-4419-9662-6e7933e12684-catalog-content\") pod \"certified-operators-scp96\" (UID: \"d416aef0-20fc-4419-9662-6e7933e12684\") " pod="openshift-marketplace/certified-operators-scp96" Jan 30 17:10:48 crc kubenswrapper[4875]: I0130 17:10:48.713217 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s6ck\" (UniqueName: \"kubernetes.io/projected/d416aef0-20fc-4419-9662-6e7933e12684-kube-api-access-9s6ck\") pod \"certified-operators-scp96\" (UID: \"d416aef0-20fc-4419-9662-6e7933e12684\") " pod="openshift-marketplace/certified-operators-scp96" Jan 30 17:10:48 crc kubenswrapper[4875]: I0130 17:10:48.713264 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d416aef0-20fc-4419-9662-6e7933e12684-utilities\") pod \"certified-operators-scp96\" (UID: \"d416aef0-20fc-4419-9662-6e7933e12684\") " pod="openshift-marketplace/certified-operators-scp96" Jan 30 17:10:48 crc kubenswrapper[4875]: I0130 17:10:48.713340 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d416aef0-20fc-4419-9662-6e7933e12684-catalog-content\") pod \"certified-operators-scp96\" (UID: \"d416aef0-20fc-4419-9662-6e7933e12684\") " pod="openshift-marketplace/certified-operators-scp96" Jan 30 17:10:48 crc kubenswrapper[4875]: I0130 17:10:48.713883 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d416aef0-20fc-4419-9662-6e7933e12684-catalog-content\") pod \"certified-operators-scp96\" (UID: \"d416aef0-20fc-4419-9662-6e7933e12684\") " pod="openshift-marketplace/certified-operators-scp96" Jan 30 17:10:48 crc kubenswrapper[4875]: I0130 17:10:48.714030 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d416aef0-20fc-4419-9662-6e7933e12684-utilities\") pod \"certified-operators-scp96\" (UID: \"d416aef0-20fc-4419-9662-6e7933e12684\") " pod="openshift-marketplace/certified-operators-scp96" Jan 30 17:10:48 crc kubenswrapper[4875]: I0130 17:10:48.744624 4875 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9s6ck\" (UniqueName: \"kubernetes.io/projected/d416aef0-20fc-4419-9662-6e7933e12684-kube-api-access-9s6ck\") pod \"certified-operators-scp96\" (UID: \"d416aef0-20fc-4419-9662-6e7933e12684\") " pod="openshift-marketplace/certified-operators-scp96" Jan 30 17:10:48 crc kubenswrapper[4875]: I0130 17:10:48.764395 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-scp96" Jan 30 17:10:49 crc kubenswrapper[4875]: I0130 17:10:49.248303 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-scp96"] Jan 30 17:10:49 crc kubenswrapper[4875]: I0130 17:10:49.277605 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" event={"ID":"662b188b-86ea-439e-a40b-6284d49e476e","Type":"ContainerStarted","Data":"c1332c7424d91f0c23df970e9682f9cc19b912dd14c9f4a187e14ebedaa2a001"} Jan 30 17:10:49 crc kubenswrapper[4875]: I0130 17:10:49.277713 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:49 crc kubenswrapper[4875]: I0130 17:10:49.280457 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scp96" event={"ID":"d416aef0-20fc-4419-9662-6e7933e12684","Type":"ContainerStarted","Data":"a431031739a0660e71070435f6cde3b70ccae931e3035c79a0f3417d4a29a56b"} Jan 30 17:10:49 crc kubenswrapper[4875]: I0130 17:10:49.309815 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" podStartSLOduration=35.309795851 podStartE2EDuration="35.309795851s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:10:49.309730708 +0000 UTC m=+859.857094091" watchObservedRunningTime="2026-01-30 17:10:49.309795851 +0000 UTC m=+859.857159234" Jan 30 17:10:50 crc kubenswrapper[4875]: I0130 17:10:50.294955 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scp96" event={"ID":"d416aef0-20fc-4419-9662-6e7933e12684","Type":"ContainerStarted","Data":"a7838fe7347df0343dc24453b4dfbf4d2f9782d8e3d2cf152c5f364badfda847"} Jan 30 17:10:51 crc kubenswrapper[4875]: I0130 17:10:51.303167 4875 generic.go:334] "Generic (PLEG): container finished" podID="d416aef0-20fc-4419-9662-6e7933e12684" containerID="a7838fe7347df0343dc24453b4dfbf4d2f9782d8e3d2cf152c5f364badfda847" exitCode=0 Jan 30 17:10:51 crc kubenswrapper[4875]: I0130 17:10:51.303274 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scp96" event={"ID":"d416aef0-20fc-4419-9662-6e7933e12684","Type":"ContainerDied","Data":"a7838fe7347df0343dc24453b4dfbf4d2f9782d8e3d2cf152c5f364badfda847"} Jan 30 17:10:54 crc kubenswrapper[4875]: I0130 17:10:54.328437 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scp96" event={"ID":"d416aef0-20fc-4419-9662-6e7933e12684","Type":"ContainerStarted","Data":"2dd18e0951aa6bff5c354933c4d1f8009c842739ae73fe37d9e5e66c8adeee52"} Jan 30 17:10:54 crc kubenswrapper[4875]: I0130 17:10:54.331915 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" event={"ID":"59490e66-2646-4a95-9b81-e372fbd2f921","Type":"ContainerStarted","Data":"8aded2c837c017828344da60cdaa7f76b28801c5a08b9b886d8d798de6358e6d"} Jan 30 17:10:54 crc kubenswrapper[4875]: I0130 17:10:54.332106 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" Jan 30 17:10:55 crc kubenswrapper[4875]: I0130 17:10:55.342176 4875 generic.go:334] "Generic (PLEG): container finished" podID="d416aef0-20fc-4419-9662-6e7933e12684" containerID="2dd18e0951aa6bff5c354933c4d1f8009c842739ae73fe37d9e5e66c8adeee52" exitCode=0 Jan 30 17:10:55 crc kubenswrapper[4875]: I0130 17:10:55.342360 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scp96" event={"ID":"d416aef0-20fc-4419-9662-6e7933e12684","Type":"ContainerDied","Data":"2dd18e0951aa6bff5c354933c4d1f8009c842739ae73fe37d9e5e66c8adeee52"} Jan 30 17:10:55 crc kubenswrapper[4875]: I0130 17:10:55.358641 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" podStartSLOduration=34.436855665 podStartE2EDuration="41.358626034s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:47.071949498 +0000 UTC m=+857.619312891" lastFinishedPulling="2026-01-30 17:10:53.993719877 +0000 UTC m=+864.541083260" observedRunningTime="2026-01-30 17:10:54.38085524 +0000 UTC m=+864.928218623" watchObservedRunningTime="2026-01-30 17:10:55.358626034 +0000 UTC m=+865.905989417" Jan 30 17:10:55 crc kubenswrapper[4875]: I0130 17:10:55.626288 4875 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:10:56 crc kubenswrapper[4875]: I0130 17:10:56.362829 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" event={"ID":"9a2f99f7-889a-4847-88f0-3241c2fa3353","Type":"ContainerStarted","Data":"b352efd5f345ae02e93fbbba6cd0e303149f4728fe9e5a8b8cc4d6a52bb65b75"} Jan 30 17:10:56 crc kubenswrapper[4875]: I0130 17:10:56.363293 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" Jan 30 17:10:56 crc kubenswrapper[4875]: I0130 17:10:56.383368 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" podStartSLOduration=33.460302944 podStartE2EDuration="42.383347181s" podCreationTimestamp="2026-01-30 17:10:14 +0000 UTC" firstStartedPulling="2026-01-30 17:10:46.783955331 +0000 UTC m=+857.331318714" lastFinishedPulling="2026-01-30 17:10:55.706999568 +0000 UTC m=+866.254362951" observedRunningTime="2026-01-30 17:10:56.382529373 +0000 UTC m=+866.929892756" watchObservedRunningTime="2026-01-30 17:10:56.383347181 +0000 UTC m=+866.930710564" Jan 30 17:10:56 crc kubenswrapper[4875]: I0130 17:10:56.985602 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6f764c8dd-9ntw2" Jan 30 17:10:57 crc kubenswrapper[4875]: I0130 17:10:57.370423 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scp96" 
event={"ID":"d416aef0-20fc-4419-9662-6e7933e12684","Type":"ContainerStarted","Data":"40f346cba46d8213f44a78239af0cae3f20ae9ac42be073663e3747bb3e27b12"} Jan 30 17:10:57 crc kubenswrapper[4875]: I0130 17:10:57.392258 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-scp96" podStartSLOduration=4.109215029 podStartE2EDuration="9.392240671s" podCreationTimestamp="2026-01-30 17:10:48 +0000 UTC" firstStartedPulling="2026-01-30 17:10:51.30501623 +0000 UTC m=+861.852379613" lastFinishedPulling="2026-01-30 17:10:56.588041872 +0000 UTC m=+867.135405255" observedRunningTime="2026-01-30 17:10:57.388040367 +0000 UTC m=+867.935403750" watchObservedRunningTime="2026-01-30 17:10:57.392240671 +0000 UTC m=+867.939604064" Jan 30 17:10:58 crc kubenswrapper[4875]: I0130 17:10:58.765110 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-scp96" Jan 30 17:10:58 crc kubenswrapper[4875]: I0130 17:10:58.765458 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-scp96" Jan 30 17:10:58 crc kubenswrapper[4875]: I0130 17:10:58.805946 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-scp96" Jan 30 17:10:59 crc kubenswrapper[4875]: I0130 17:10:59.780421 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nfdj6"] Jan 30 17:10:59 crc kubenswrapper[4875]: I0130 17:10:59.782084 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfdj6" Jan 30 17:10:59 crc kubenswrapper[4875]: I0130 17:10:59.796026 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfdj6"] Jan 30 17:10:59 crc kubenswrapper[4875]: I0130 17:10:59.868167 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5155590d-0df8-4bbf-8010-237a976cc0e9-catalog-content\") pod \"redhat-marketplace-nfdj6\" (UID: \"5155590d-0df8-4bbf-8010-237a976cc0e9\") " pod="openshift-marketplace/redhat-marketplace-nfdj6" Jan 30 17:10:59 crc kubenswrapper[4875]: I0130 17:10:59.868228 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pbh2\" (UniqueName: \"kubernetes.io/projected/5155590d-0df8-4bbf-8010-237a976cc0e9-kube-api-access-5pbh2\") pod \"redhat-marketplace-nfdj6\" (UID: \"5155590d-0df8-4bbf-8010-237a976cc0e9\") " pod="openshift-marketplace/redhat-marketplace-nfdj6" Jan 30 17:10:59 crc kubenswrapper[4875]: I0130 17:10:59.868264 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5155590d-0df8-4bbf-8010-237a976cc0e9-utilities\") pod \"redhat-marketplace-nfdj6\" (UID: \"5155590d-0df8-4bbf-8010-237a976cc0e9\") " pod="openshift-marketplace/redhat-marketplace-nfdj6" Jan 30 17:10:59 crc kubenswrapper[4875]: I0130 17:10:59.969893 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5155590d-0df8-4bbf-8010-237a976cc0e9-utilities\") pod \"redhat-marketplace-nfdj6\" (UID: \"5155590d-0df8-4bbf-8010-237a976cc0e9\") " pod="openshift-marketplace/redhat-marketplace-nfdj6" Jan 30 17:10:59 crc kubenswrapper[4875]: 
I0130 17:10:59.970001 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5155590d-0df8-4bbf-8010-237a976cc0e9-catalog-content\") pod \"redhat-marketplace-nfdj6\" (UID: \"5155590d-0df8-4bbf-8010-237a976cc0e9\") " pod="openshift-marketplace/redhat-marketplace-nfdj6" Jan 30 17:10:59 crc kubenswrapper[4875]: I0130 17:10:59.970034 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pbh2\" (UniqueName: \"kubernetes.io/projected/5155590d-0df8-4bbf-8010-237a976cc0e9-kube-api-access-5pbh2\") pod \"redhat-marketplace-nfdj6\" (UID: \"5155590d-0df8-4bbf-8010-237a976cc0e9\") " pod="openshift-marketplace/redhat-marketplace-nfdj6" Jan 30 17:10:59 crc kubenswrapper[4875]: I0130 17:10:59.970466 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5155590d-0df8-4bbf-8010-237a976cc0e9-utilities\") pod \"redhat-marketplace-nfdj6\" (UID: \"5155590d-0df8-4bbf-8010-237a976cc0e9\") " pod="openshift-marketplace/redhat-marketplace-nfdj6" Jan 30 17:10:59 crc kubenswrapper[4875]: I0130 17:10:59.970739 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5155590d-0df8-4bbf-8010-237a976cc0e9-catalog-content\") pod \"redhat-marketplace-nfdj6\" (UID: \"5155590d-0df8-4bbf-8010-237a976cc0e9\") " pod="openshift-marketplace/redhat-marketplace-nfdj6" Jan 30 17:10:59 crc kubenswrapper[4875]: I0130 17:10:59.989981 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pbh2\" (UniqueName: \"kubernetes.io/projected/5155590d-0df8-4bbf-8010-237a976cc0e9-kube-api-access-5pbh2\") pod \"redhat-marketplace-nfdj6\" (UID: \"5155590d-0df8-4bbf-8010-237a976cc0e9\") " pod="openshift-marketplace/redhat-marketplace-nfdj6" Jan 30 17:11:00 crc kubenswrapper[4875]: I0130 17:11:00.099835 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfdj6" Jan 30 17:11:00 crc kubenswrapper[4875]: I0130 17:11:00.538970 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfdj6"] Jan 30 17:11:00 crc kubenswrapper[4875]: W0130 17:11:00.542683 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5155590d_0df8_4bbf_8010_237a976cc0e9.slice/crio-429097ee6bc8ab06cf4de1b98c698aa2309a46260247ac039fc33ff84b181505 WatchSource:0}: Error finding container 429097ee6bc8ab06cf4de1b98c698aa2309a46260247ac039fc33ff84b181505: Status 404 returned error can't find the container with id 429097ee6bc8ab06cf4de1b98c698aa2309a46260247ac039fc33ff84b181505 Jan 30 17:11:01 crc kubenswrapper[4875]: I0130 17:11:01.398467 4875 generic.go:334] "Generic (PLEG): container finished" podID="5155590d-0df8-4bbf-8010-237a976cc0e9" containerID="306cbe63187b4027521531669a32ad3a9bfa8762d77cd183d3bb39361df79e0a" exitCode=0 Jan 30 17:11:01 crc kubenswrapper[4875]: I0130 17:11:01.398515 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfdj6" event={"ID":"5155590d-0df8-4bbf-8010-237a976cc0e9","Type":"ContainerDied","Data":"306cbe63187b4027521531669a32ad3a9bfa8762d77cd183d3bb39361df79e0a"} Jan 30 17:11:01 crc kubenswrapper[4875]: I0130 17:11:01.398794 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfdj6" event={"ID":"5155590d-0df8-4bbf-8010-237a976cc0e9","Type":"ContainerStarted","Data":"429097ee6bc8ab06cf4de1b98c698aa2309a46260247ac039fc33ff84b181505"} Jan 30 17:11:02 crc kubenswrapper[4875]: I0130 17:11:02.406059 4875 generic.go:334] "Generic (PLEG): container finished" podID="5155590d-0df8-4bbf-8010-237a976cc0e9" containerID="7666eed00d013f5d21eb9bec5993826524c7a9fe4389c6d095fe134e16326e0a" exitCode=0 Jan 30 17:11:02 crc kubenswrapper[4875]: I0130 17:11:02.406147 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfdj6" event={"ID":"5155590d-0df8-4bbf-8010-237a976cc0e9","Type":"ContainerDied","Data":"7666eed00d013f5d21eb9bec5993826524c7a9fe4389c6d095fe134e16326e0a"} Jan 30 17:11:03 crc kubenswrapper[4875]: I0130 17:11:03.414058 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfdj6" event={"ID":"5155590d-0df8-4bbf-8010-237a976cc0e9","Type":"ContainerStarted","Data":"fde009239b80ebb365dbb77159f6056a09d228efff1c6095f4d92e1d6e5a723d"} Jan 30 17:11:03 crc kubenswrapper[4875]: I0130 17:11:03.434227 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nfdj6" podStartSLOduration=2.673030062 podStartE2EDuration="4.434205968s" podCreationTimestamp="2026-01-30 17:10:59 +0000 UTC" firstStartedPulling="2026-01-30 17:11:01.400083593 +0000 UTC m=+871.947446976" lastFinishedPulling="2026-01-30 17:11:03.161259509 +0000 UTC m=+873.708622882" observedRunningTime="2026-01-30 17:11:03.430121057 +0000 UTC m=+873.977484440" watchObservedRunningTime="2026-01-30 17:11:03.434205968 +0000 UTC m=+873.981569351" Jan 30 17:11:06 crc kubenswrapper[4875]: I0130 17:11:06.394134 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-frg6k" Jan 30 17:11:06 crc kubenswrapper[4875]: I0130 17:11:06.630931 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2" Jan 30 17:11:08 crc kubenswrapper[4875]: I0130 17:11:08.824912 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-scp96" Jan 30 17:11:08 crc kubenswrapper[4875]: I0130 17:11:08.880849 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-scp96"] Jan 30 17:11:09 crc kubenswrapper[4875]: I0130 17:11:09.447723 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-scp96" podUID="d416aef0-20fc-4419-9662-6e7933e12684" containerName="registry-server" containerID="cri-o://40f346cba46d8213f44a78239af0cae3f20ae9ac42be073663e3747bb3e27b12" gracePeriod=2 Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.100492 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nfdj6" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.100854 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nfdj6" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.146915 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nfdj6" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.319762 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-scp96" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.322048 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s6ck\" (UniqueName: \"kubernetes.io/projected/d416aef0-20fc-4419-9662-6e7933e12684-kube-api-access-9s6ck\") pod \"d416aef0-20fc-4419-9662-6e7933e12684\" (UID: \"d416aef0-20fc-4419-9662-6e7933e12684\") " Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.322103 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d416aef0-20fc-4419-9662-6e7933e12684-catalog-content\") pod \"d416aef0-20fc-4419-9662-6e7933e12684\" (UID: \"d416aef0-20fc-4419-9662-6e7933e12684\") " Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.322198 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d416aef0-20fc-4419-9662-6e7933e12684-utilities\") pod \"d416aef0-20fc-4419-9662-6e7933e12684\" (UID: \"d416aef0-20fc-4419-9662-6e7933e12684\") " Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.323610 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d416aef0-20fc-4419-9662-6e7933e12684-utilities" (OuterVolumeSpecName: "utilities") pod "d416aef0-20fc-4419-9662-6e7933e12684" (UID: "d416aef0-20fc-4419-9662-6e7933e12684"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.331805 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d416aef0-20fc-4419-9662-6e7933e12684-kube-api-access-9s6ck" (OuterVolumeSpecName: "kube-api-access-9s6ck") pod "d416aef0-20fc-4419-9662-6e7933e12684" (UID: "d416aef0-20fc-4419-9662-6e7933e12684"). InnerVolumeSpecName "kube-api-access-9s6ck". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.376750 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d416aef0-20fc-4419-9662-6e7933e12684-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d416aef0-20fc-4419-9662-6e7933e12684" (UID: "d416aef0-20fc-4419-9662-6e7933e12684"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.423519 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s6ck\" (UniqueName: \"kubernetes.io/projected/d416aef0-20fc-4419-9662-6e7933e12684-kube-api-access-9s6ck\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.423553 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d416aef0-20fc-4419-9662-6e7933e12684-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.423563 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d416aef0-20fc-4419-9662-6e7933e12684-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.455219 4875 generic.go:334] "Generic (PLEG): container finished" podID="d416aef0-20fc-4419-9662-6e7933e12684" containerID="40f346cba46d8213f44a78239af0cae3f20ae9ac42be073663e3747bb3e27b12" exitCode=0 Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.455298 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-scp96" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.455311 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scp96" event={"ID":"d416aef0-20fc-4419-9662-6e7933e12684","Type":"ContainerDied","Data":"40f346cba46d8213f44a78239af0cae3f20ae9ac42be073663e3747bb3e27b12"} Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.455351 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scp96" event={"ID":"d416aef0-20fc-4419-9662-6e7933e12684","Type":"ContainerDied","Data":"a431031739a0660e71070435f6cde3b70ccae931e3035c79a0f3417d4a29a56b"} Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.455370 4875 scope.go:117] "RemoveContainer" containerID="40f346cba46d8213f44a78239af0cae3f20ae9ac42be073663e3747bb3e27b12" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.493203 4875 scope.go:117] "RemoveContainer" containerID="2dd18e0951aa6bff5c354933c4d1f8009c842739ae73fe37d9e5e66c8adeee52" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.511211 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nfdj6" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.511865 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-scp96"] Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.518298 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-scp96"] Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.533635 4875 scope.go:117] "RemoveContainer" containerID="a7838fe7347df0343dc24453b4dfbf4d2f9782d8e3d2cf152c5f364badfda847" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.558596 4875 
scope.go:117] "RemoveContainer" containerID="40f346cba46d8213f44a78239af0cae3f20ae9ac42be073663e3747bb3e27b12" Jan 30 17:11:10 crc kubenswrapper[4875]: E0130 17:11:10.559015 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40f346cba46d8213f44a78239af0cae3f20ae9ac42be073663e3747bb3e27b12\": container with ID starting with 40f346cba46d8213f44a78239af0cae3f20ae9ac42be073663e3747bb3e27b12 not found: ID does not exist" containerID="40f346cba46d8213f44a78239af0cae3f20ae9ac42be073663e3747bb3e27b12" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.559061 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40f346cba46d8213f44a78239af0cae3f20ae9ac42be073663e3747bb3e27b12"} err="failed to get container status \"40f346cba46d8213f44a78239af0cae3f20ae9ac42be073663e3747bb3e27b12\": rpc error: code = NotFound desc = could not find container \"40f346cba46d8213f44a78239af0cae3f20ae9ac42be073663e3747bb3e27b12\": container with ID starting with 40f346cba46d8213f44a78239af0cae3f20ae9ac42be073663e3747bb3e27b12 not found: ID does not exist" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.559094 4875 scope.go:117] "RemoveContainer" containerID="2dd18e0951aa6bff5c354933c4d1f8009c842739ae73fe37d9e5e66c8adeee52" Jan 30 17:11:10 crc kubenswrapper[4875]: E0130 17:11:10.559453 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dd18e0951aa6bff5c354933c4d1f8009c842739ae73fe37d9e5e66c8adeee52\": container with ID starting with 2dd18e0951aa6bff5c354933c4d1f8009c842739ae73fe37d9e5e66c8adeee52 not found: ID does not exist" containerID="2dd18e0951aa6bff5c354933c4d1f8009c842739ae73fe37d9e5e66c8adeee52" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.559481 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dd18e0951aa6bff5c354933c4d1f8009c842739ae73fe37d9e5e66c8adeee52"} err="failed to get container status \"2dd18e0951aa6bff5c354933c4d1f8009c842739ae73fe37d9e5e66c8adeee52\": rpc error: code = NotFound desc = could not find container \"2dd18e0951aa6bff5c354933c4d1f8009c842739ae73fe37d9e5e66c8adeee52\": container with ID starting with 2dd18e0951aa6bff5c354933c4d1f8009c842739ae73fe37d9e5e66c8adeee52 not found: ID does not exist" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.559502 4875 scope.go:117] "RemoveContainer" containerID="a7838fe7347df0343dc24453b4dfbf4d2f9782d8e3d2cf152c5f364badfda847" Jan 30 17:11:10 crc kubenswrapper[4875]: E0130 17:11:10.559848 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7838fe7347df0343dc24453b4dfbf4d2f9782d8e3d2cf152c5f364badfda847\": container with ID starting with a7838fe7347df0343dc24453b4dfbf4d2f9782d8e3d2cf152c5f364badfda847 not found: ID does not exist" containerID="a7838fe7347df0343dc24453b4dfbf4d2f9782d8e3d2cf152c5f364badfda847" Jan 30 17:11:10 crc kubenswrapper[4875]: I0130 17:11:10.559881 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7838fe7347df0343dc24453b4dfbf4d2f9782d8e3d2cf152c5f364badfda847"} err="failed to get container status \"a7838fe7347df0343dc24453b4dfbf4d2f9782d8e3d2cf152c5f364badfda847\": rpc error: code = NotFound desc = could not find container \"a7838fe7347df0343dc24453b4dfbf4d2f9782d8e3d2cf152c5f364badfda847\": container with ID starting with 
Jan 30 17:11:12 crc kubenswrapper[4875]: I0130 17:11:12.143995 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d416aef0-20fc-4419-9662-6e7933e12684" path="/var/lib/kubelet/pods/d416aef0-20fc-4419-9662-6e7933e12684/volumes" Jan 30 17:11:12 crc kubenswrapper[4875]: I0130 17:11:12.874656 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfdj6"] Jan 30 17:11:12 crc kubenswrapper[4875]: I0130 17:11:12.875135 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nfdj6" podUID="5155590d-0df8-4bbf-8010-237a976cc0e9" containerName="registry-server" containerID="cri-o://fde009239b80ebb365dbb77159f6056a09d228efff1c6095f4d92e1d6e5a723d" gracePeriod=2 Jan 30 17:11:14 crc kubenswrapper[4875]: I0130 17:11:14.486430 4875 generic.go:334] "Generic (PLEG): container finished" podID="5155590d-0df8-4bbf-8010-237a976cc0e9" containerID="fde009239b80ebb365dbb77159f6056a09d228efff1c6095f4d92e1d6e5a723d" exitCode=0 Jan 30 17:11:14 crc kubenswrapper[4875]: I0130 17:11:14.486518 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfdj6" event={"ID":"5155590d-0df8-4bbf-8010-237a976cc0e9","Type":"ContainerDied","Data":"fde009239b80ebb365dbb77159f6056a09d228efff1c6095f4d92e1d6e5a723d"} Jan 30 17:11:14 crc kubenswrapper[4875]: I0130 17:11:14.486724 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfdj6" event={"ID":"5155590d-0df8-4bbf-8010-237a976cc0e9","Type":"ContainerDied","Data":"429097ee6bc8ab06cf4de1b98c698aa2309a46260247ac039fc33ff84b181505"} Jan 30 17:11:14 crc kubenswrapper[4875]: I0130 17:11:14.486740 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="429097ee6bc8ab06cf4de1b98c698aa2309a46260247ac039fc33ff84b181505" Jan 30 17:11:14 crc kubenswrapper[4875]: I0130 17:11:14.505431 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfdj6" Jan 30 17:11:14 crc kubenswrapper[4875]: I0130 17:11:14.584457 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5155590d-0df8-4bbf-8010-237a976cc0e9-catalog-content\") pod \"5155590d-0df8-4bbf-8010-237a976cc0e9\" (UID: \"5155590d-0df8-4bbf-8010-237a976cc0e9\") " Jan 30 17:11:14 crc kubenswrapper[4875]: I0130 17:11:14.584515 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pbh2\" (UniqueName: \"kubernetes.io/projected/5155590d-0df8-4bbf-8010-237a976cc0e9-kube-api-access-5pbh2\") pod \"5155590d-0df8-4bbf-8010-237a976cc0e9\" (UID: \"5155590d-0df8-4bbf-8010-237a976cc0e9\") " Jan 30 17:11:14 crc kubenswrapper[4875]: I0130 17:11:14.584535 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5155590d-0df8-4bbf-8010-237a976cc0e9-utilities\") pod \"5155590d-0df8-4bbf-8010-237a976cc0e9\" (UID: \"5155590d-0df8-4bbf-8010-237a976cc0e9\") " Jan 30 17:11:14 crc kubenswrapper[4875]: I0130 17:11:14.585779 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5155590d-0df8-4bbf-8010-237a976cc0e9-utilities" (OuterVolumeSpecName: "utilities") pod "5155590d-0df8-4bbf-8010-237a976cc0e9" (UID: "5155590d-0df8-4bbf-8010-237a976cc0e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:11:14 crc kubenswrapper[4875]: I0130 17:11:14.597808 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5155590d-0df8-4bbf-8010-237a976cc0e9-kube-api-access-5pbh2" (OuterVolumeSpecName: "kube-api-access-5pbh2") pod "5155590d-0df8-4bbf-8010-237a976cc0e9" (UID: "5155590d-0df8-4bbf-8010-237a976cc0e9"). InnerVolumeSpecName "kube-api-access-5pbh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:11:14 crc kubenswrapper[4875]: I0130 17:11:14.616899 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5155590d-0df8-4bbf-8010-237a976cc0e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5155590d-0df8-4bbf-8010-237a976cc0e9" (UID: "5155590d-0df8-4bbf-8010-237a976cc0e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:11:14 crc kubenswrapper[4875]: I0130 17:11:14.685920 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5155590d-0df8-4bbf-8010-237a976cc0e9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:14 crc kubenswrapper[4875]: I0130 17:11:14.685960 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pbh2\" (UniqueName: \"kubernetes.io/projected/5155590d-0df8-4bbf-8010-237a976cc0e9-kube-api-access-5pbh2\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:14 crc kubenswrapper[4875]: I0130 17:11:14.685976 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5155590d-0df8-4bbf-8010-237a976cc0e9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.491548 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfdj6" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.520834 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfdj6"] Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.531347 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfdj6"] Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.817893 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 30 17:11:15 crc kubenswrapper[4875]: E0130 17:11:15.818160 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d416aef0-20fc-4419-9662-6e7933e12684" containerName="registry-server" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.818179 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="d416aef0-20fc-4419-9662-6e7933e12684" containerName="registry-server" Jan 30 17:11:15 crc kubenswrapper[4875]: E0130 17:11:15.818201 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d416aef0-20fc-4419-9662-6e7933e12684" containerName="extract-content" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.818208 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="d416aef0-20fc-4419-9662-6e7933e12684" containerName="extract-content" Jan 30 17:11:15 crc kubenswrapper[4875]: E0130 17:11:15.818218 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5155590d-0df8-4bbf-8010-237a976cc0e9" containerName="registry-server" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.818223 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="5155590d-0df8-4bbf-8010-237a976cc0e9" containerName="registry-server" Jan 30 17:11:15 crc kubenswrapper[4875]: E0130 17:11:15.818234 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5155590d-0df8-4bbf-8010-237a976cc0e9" containerName="extract-utilities" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.818240 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="5155590d-0df8-4bbf-8010-237a976cc0e9" containerName="extract-utilities" Jan 30 17:11:15 crc kubenswrapper[4875]: E0130 17:11:15.818253 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5155590d-0df8-4bbf-8010-237a976cc0e9" containerName="extract-content" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.818259 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="5155590d-0df8-4bbf-8010-237a976cc0e9" containerName="extract-content" Jan 30 17:11:15 crc kubenswrapper[4875]: E0130 17:11:15.818271 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d416aef0-20fc-4419-9662-6e7933e12684" containerName="extract-utilities" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.818277 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="d416aef0-20fc-4419-9662-6e7933e12684" containerName="extract-utilities" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.818431 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="d416aef0-20fc-4419-9662-6e7933e12684" containerName="registry-server" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.818443 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="5155590d-0df8-4bbf-8010-237a976cc0e9" containerName="registry-server" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.819149 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.823679 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-erlang-cookie" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.823784 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-default-user" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.823822 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"kube-root-ca.crt" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.825660 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-plugins-conf" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.826165 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-server-dockercfg-g4mkc" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.827179 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-server-conf" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.829816 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openshift-service-ca.crt" Jan 30 17:11:15 crc kubenswrapper[4875]: I0130 17:11:15.832929 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.002391 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e75a0606-ea82-4ab9-8245-feb3105a23ba-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.002440 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjfhc\" (UniqueName: \"kubernetes.io/projected/e75a0606-ea82-4ab9-8245-feb3105a23ba-kube-api-access-hjfhc\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.002481 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e75a0606-ea82-4ab9-8245-feb3105a23ba-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.002546 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4a5fe458-dbf8-43ad-aaa9-0cf6fc493057\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a5fe458-dbf8-43ad-aaa9-0cf6fc493057\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.002612 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e75a0606-ea82-4ab9-8245-feb3105a23ba-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 
17:11:16.002653 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e75a0606-ea82-4ab9-8245-feb3105a23ba-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.002673 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e75a0606-ea82-4ab9-8245-feb3105a23ba-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.002706 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e75a0606-ea82-4ab9-8245-feb3105a23ba-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.002727 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e75a0606-ea82-4ab9-8245-feb3105a23ba-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.047573 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.048632 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.050089 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-plugins-conf" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.050615 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-erlang-cookie" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.050872 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-dockercfg-xjztn" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.050887 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-conf" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.050884 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-default-user" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.063377 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.104496 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e75a0606-ea82-4ab9-8245-feb3105a23ba-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.104547 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjfhc\" (UniqueName: \"kubernetes.io/projected/e75a0606-ea82-4ab9-8245-feb3105a23ba-kube-api-access-hjfhc\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.104644 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e75a0606-ea82-4ab9-8245-feb3105a23ba-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.104708 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4a5fe458-dbf8-43ad-aaa9-0cf6fc493057\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a5fe458-dbf8-43ad-aaa9-0cf6fc493057\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.104751 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e75a0606-ea82-4ab9-8245-feb3105a23ba-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.104787 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e75a0606-ea82-4ab9-8245-feb3105a23ba-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 
17:11:16.104806 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e75a0606-ea82-4ab9-8245-feb3105a23ba-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.104831 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e75a0606-ea82-4ab9-8245-feb3105a23ba-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.104854 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e75a0606-ea82-4ab9-8245-feb3105a23ba-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.105810 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e75a0606-ea82-4ab9-8245-feb3105a23ba-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.106742 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e75a0606-ea82-4ab9-8245-feb3105a23ba-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.106962 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e75a0606-ea82-4ab9-8245-feb3105a23ba-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.107151 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e75a0606-ea82-4ab9-8245-feb3105a23ba-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.108562 4875 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.109047 4875 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4a5fe458-dbf8-43ad-aaa9-0cf6fc493057\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a5fe458-dbf8-43ad-aaa9-0cf6fc493057\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/88d74903249414f5622a2792dc48ed449b217e59deca7f6176995c7bea9cea84/globalmount\"" pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.109493 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e75a0606-ea82-4ab9-8245-feb3105a23ba-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.110382 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e75a0606-ea82-4ab9-8245-feb3105a23ba-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.111688 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e75a0606-ea82-4ab9-8245-feb3105a23ba-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.123729 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjfhc\" (UniqueName: \"kubernetes.io/projected/e75a0606-ea82-4ab9-8245-feb3105a23ba-kube-api-access-hjfhc\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.140548 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4a5fe458-dbf8-43ad-aaa9-0cf6fc493057\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4a5fe458-dbf8-43ad-aaa9-0cf6fc493057\") pod \"rabbitmq-server-0\" (UID: \"e75a0606-ea82-4ab9-8245-feb3105a23ba\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.146335 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5155590d-0df8-4bbf-8010-237a976cc0e9" path="/var/lib/kubelet/pods/5155590d-0df8-4bbf-8010-237a976cc0e9/volumes" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.206604 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d4b13af-d4ec-458c-b3a9-e060171110f6-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.206864 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9dgt\" (UniqueName: \"kubernetes.io/projected/2d4b13af-d4ec-458c-b3a9-e060171110f6-kube-api-access-n9dgt\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " 
pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.206989 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d4b13af-d4ec-458c-b3a9-e060171110f6-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.207103 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d4b13af-d4ec-458c-b3a9-e060171110f6-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.207179 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d4b13af-d4ec-458c-b3a9-e060171110f6-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.207299 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-221e54a3-c4f1-4a81-8fae-32f167610064\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-221e54a3-c4f1-4a81-8fae-32f167610064\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.207376 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d4b13af-d4ec-458c-b3a9-e060171110f6-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.207456 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d4b13af-d4ec-458c-b3a9-e060171110f6-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.207551 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d4b13af-d4ec-458c-b3a9-e060171110f6-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.301028 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.302246 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.303907 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-default-user" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.305468 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-erlang-cookie" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.305608 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-server-conf" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.305716 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-server-dockercfg-t8rpv" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.305832 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-plugins-conf" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.309328 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d4b13af-d4ec-458c-b3a9-e060171110f6-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.309362 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d4b13af-d4ec-458c-b3a9-e060171110f6-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.309401 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-221e54a3-c4f1-4a81-8fae-32f167610064\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-221e54a3-c4f1-4a81-8fae-32f167610064\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.309423 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d4b13af-d4ec-458c-b3a9-e060171110f6-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.309445 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d4b13af-d4ec-458c-b3a9-e060171110f6-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.309464 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d4b13af-d4ec-458c-b3a9-e060171110f6-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.309509 4875 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d4b13af-d4ec-458c-b3a9-e060171110f6-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.309532 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9dgt\" (UniqueName: \"kubernetes.io/projected/2d4b13af-d4ec-458c-b3a9-e060171110f6-kube-api-access-n9dgt\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.309550 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d4b13af-d4ec-458c-b3a9-e060171110f6-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.310398 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d4b13af-d4ec-458c-b3a9-e060171110f6-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.312155 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d4b13af-d4ec-458c-b3a9-e060171110f6-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.315937 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d4b13af-d4ec-458c-b3a9-e060171110f6-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.315997 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d4b13af-d4ec-458c-b3a9-e060171110f6-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.317083 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d4b13af-d4ec-458c-b3a9-e060171110f6-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.317357 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.320087 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d4b13af-d4ec-458c-b3a9-e060171110f6-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: 
\"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.321702 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d4b13af-d4ec-458c-b3a9-e060171110f6-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.337604 4875 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.337960 4875 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-221e54a3-c4f1-4a81-8fae-32f167610064\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-221e54a3-c4f1-4a81-8fae-32f167610064\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5447f894084ada3976c6b9619c2a60c442df161c11a0758da40095d8646a2e5a/globalmount\"" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.351502 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9dgt\" (UniqueName: \"kubernetes.io/projected/2d4b13af-d4ec-458c-b3a9-e060171110f6-kube-api-access-n9dgt\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.381917 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-221e54a3-c4f1-4a81-8fae-32f167610064\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-221e54a3-c4f1-4a81-8fae-32f167610064\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"2d4b13af-d4ec-458c-b3a9-e060171110f6\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.410550 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c502d3c2-3d11-433f-9a56-8a77ff8b746e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c502d3c2-3d11-433f-9a56-8a77ff8b746e\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.410606 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6ee4eec-358c-45f7-9b1a-143de69b2929-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.410632 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6ee4eec-358c-45f7-9b1a-143de69b2929-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.410658 4875 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6ee4eec-358c-45f7-9b1a-143de69b2929-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.410688 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6ee4eec-358c-45f7-9b1a-143de69b2929-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.410709 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6ee4eec-358c-45f7-9b1a-143de69b2929-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.410727 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6ee4eec-358c-45f7-9b1a-143de69b2929-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.411308 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6ee4eec-358c-45f7-9b1a-143de69b2929-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.411371 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp7w9\" (UniqueName: \"kubernetes.io/projected/b6ee4eec-358c-45f7-9b1a-143de69b2929-kube-api-access-zp7w9\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.439220 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.512520 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c502d3c2-3d11-433f-9a56-8a77ff8b746e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c502d3c2-3d11-433f-9a56-8a77ff8b746e\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.512578 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6ee4eec-358c-45f7-9b1a-143de69b2929-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.512640 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6ee4eec-358c-45f7-9b1a-143de69b2929-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.512676 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6ee4eec-358c-45f7-9b1a-143de69b2929-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.512719 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6ee4eec-358c-45f7-9b1a-143de69b2929-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.512753 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6ee4eec-358c-45f7-9b1a-143de69b2929-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.512779 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6ee4eec-358c-45f7-9b1a-143de69b2929-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.512812 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6ee4eec-358c-45f7-9b1a-143de69b2929-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.512844 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp7w9\" (UniqueName: \"kubernetes.io/projected/b6ee4eec-358c-45f7-9b1a-143de69b2929-kube-api-access-zp7w9\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.513679 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6ee4eec-358c-45f7-9b1a-143de69b2929-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.515087 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6ee4eec-358c-45f7-9b1a-143de69b2929-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.515513 4875 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.515537 4875 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c502d3c2-3d11-433f-9a56-8a77ff8b746e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c502d3c2-3d11-433f-9a56-8a77ff8b746e\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dd4be62acf32ea3078a33048b9afbdd689cebda07cf355aa6ff58871418f09c3/globalmount\"" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.517351 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6ee4eec-358c-45f7-9b1a-143de69b2929-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.517493 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6ee4eec-358c-45f7-9b1a-143de69b2929-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.518660 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6ee4eec-358c-45f7-9b1a-143de69b2929-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.519936 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6ee4eec-358c-45f7-9b1a-143de69b2929-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.522371 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6ee4eec-358c-45f7-9b1a-143de69b2929-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " 
pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.530672 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp7w9\" (UniqueName: \"kubernetes.io/projected/b6ee4eec-358c-45f7-9b1a-143de69b2929-kube-api-access-zp7w9\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.548390 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c502d3c2-3d11-433f-9a56-8a77ff8b746e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c502d3c2-3d11-433f-9a56-8a77ff8b746e\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6ee4eec-358c-45f7-9b1a-143de69b2929\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.662763 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.704914 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.861342 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 30 17:11:16 crc kubenswrapper[4875]: W0130 17:11:16.868106 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode75a0606_ea82_4ab9_8245_feb3105a23ba.slice/crio-08e6e25231f30de810b55f4c879d7876e6225dc118d798777afe93a0385ef4da WatchSource:0}: Error finding container 08e6e25231f30de810b55f4c879d7876e6225dc118d798777afe93a0385ef4da: Status 404 returned error can't find the container with id 08e6e25231f30de810b55f4c879d7876e6225dc118d798777afe93a0385ef4da Jan 30 17:11:16 crc kubenswrapper[4875]: I0130 17:11:16.955369 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 30 17:11:16 crc kubenswrapper[4875]: W0130 17:11:16.962346 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6ee4eec_358c_45f7_9b1a_143de69b2929.slice/crio-aaa6379c5207f44b8c0d534efade81da73663951bf5d959b701e5d5152c6bc94 WatchSource:0}: Error finding container aaa6379c5207f44b8c0d534efade81da73663951bf5d959b701e5d5152c6bc94: Status 404 returned error can't find the container with id aaa6379c5207f44b8c0d534efade81da73663951bf5d959b701e5d5152c6bc94 Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.085443 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 30 17:11:17 crc kubenswrapper[4875]: W0130 17:11:17.098245 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d4b13af_d4ec_458c_b3a9_e060171110f6.slice/crio-836914a66779bd63fe6401f4b4241c43cf5757d2ebf7ad80ad1166786edd24df WatchSource:0}: Error finding container 836914a66779bd63fe6401f4b4241c43cf5757d2ebf7ad80ad1166786edd24df: Status 404 returned error can't find the container with id 836914a66779bd63fe6401f4b4241c43cf5757d2ebf7ad80ad1166786edd24df Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.127338 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 30 17:11:17 crc 
kubenswrapper[4875]: I0130 17:11:17.133375 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.138875 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config-data" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.139193 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-scripts" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.139269 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-svc" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.139735 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-dockercfg-sfpk5" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.159270 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.159483 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"combined-ca-bundle" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.227342 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-31b9f2d1-5636-4487-acc2-30acfa6b2498\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-31b9f2d1-5636-4487-acc2-30acfa6b2498\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.227417 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hclc8\" (UniqueName: \"kubernetes.io/projected/2651f38f-c3ae-4970-ab34-7b9540d5aa24-kube-api-access-hclc8\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.227466 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2651f38f-c3ae-4970-ab34-7b9540d5aa24-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.227510 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2651f38f-c3ae-4970-ab34-7b9540d5aa24-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.227554 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2651f38f-c3ae-4970-ab34-7b9540d5aa24-kolla-config\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.227622 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2651f38f-c3ae-4970-ab34-7b9540d5aa24-galera-tls-certs\") pod 
\"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.227646 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2651f38f-c3ae-4970-ab34-7b9540d5aa24-config-data-default\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.227669 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2651f38f-c3ae-4970-ab34-7b9540d5aa24-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.328603 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2651f38f-c3ae-4970-ab34-7b9540d5aa24-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.328872 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2651f38f-c3ae-4970-ab34-7b9540d5aa24-config-data-default\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.328961 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2651f38f-c3ae-4970-ab34-7b9540d5aa24-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.329052 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-31b9f2d1-5636-4487-acc2-30acfa6b2498\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-31b9f2d1-5636-4487-acc2-30acfa6b2498\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.329137 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hclc8\" (UniqueName: \"kubernetes.io/projected/2651f38f-c3ae-4970-ab34-7b9540d5aa24-kube-api-access-hclc8\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.329249 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2651f38f-c3ae-4970-ab34-7b9540d5aa24-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.329353 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2651f38f-c3ae-4970-ab34-7b9540d5aa24-operator-scripts\") pod 
\"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.329473 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2651f38f-c3ae-4970-ab34-7b9540d5aa24-kolla-config\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.329523 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2651f38f-c3ae-4970-ab34-7b9540d5aa24-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.330144 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2651f38f-c3ae-4970-ab34-7b9540d5aa24-config-data-default\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.330454 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2651f38f-c3ae-4970-ab34-7b9540d5aa24-kolla-config\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.330766 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2651f38f-c3ae-4970-ab34-7b9540d5aa24-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.333026 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2651f38f-c3ae-4970-ab34-7b9540d5aa24-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.336903 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2651f38f-c3ae-4970-ab34-7b9540d5aa24-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.357038 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hclc8\" (UniqueName: \"kubernetes.io/projected/2651f38f-c3ae-4970-ab34-7b9540d5aa24-kube-api-access-hclc8\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.390918 4875 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.390962 4875 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-31b9f2d1-5636-4487-acc2-30acfa6b2498\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-31b9f2d1-5636-4487-acc2-30acfa6b2498\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f895121c1e217c5894cd6a13e90faea9ff72f530fe71f05d2017bd327be081c3/globalmount\"" pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.451483 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.454301 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/memcached-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.456394 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"memcached-config-data" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.462802 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"memcached-memcached-dockercfg-656ls" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.466073 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.498613 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-31b9f2d1-5636-4487-acc2-30acfa6b2498\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-31b9f2d1-5636-4487-acc2-30acfa6b2498\") pod \"openstack-galera-0\" (UID: \"2651f38f-c3ae-4970-ab34-7b9540d5aa24\") " pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.507561 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"b6ee4eec-358c-45f7-9b1a-143de69b2929","Type":"ContainerStarted","Data":"aaa6379c5207f44b8c0d534efade81da73663951bf5d959b701e5d5152c6bc94"} Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.512819 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"2d4b13af-d4ec-458c-b3a9-e060171110f6","Type":"ContainerStarted","Data":"836914a66779bd63fe6401f4b4241c43cf5757d2ebf7ad80ad1166786edd24df"} Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.515525 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"e75a0606-ea82-4ab9-8245-feb3105a23ba","Type":"ContainerStarted","Data":"08e6e25231f30de810b55f4c879d7876e6225dc118d798777afe93a0385ef4da"} Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.531945 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vfjc\" (UniqueName: \"kubernetes.io/projected/e387e78d-25ab-454b-9b66-d2cc13abe676-kube-api-access-4vfjc\") pod \"memcached-0\" (UID: \"e387e78d-25ab-454b-9b66-d2cc13abe676\") " pod="nova-kuttl-default/memcached-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.532333 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e387e78d-25ab-454b-9b66-d2cc13abe676-config-data\") pod \"memcached-0\" (UID: \"e387e78d-25ab-454b-9b66-d2cc13abe676\") " 
pod="nova-kuttl-default/memcached-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.532558 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e387e78d-25ab-454b-9b66-d2cc13abe676-kolla-config\") pod \"memcached-0\" (UID: \"e387e78d-25ab-454b-9b66-d2cc13abe676\") " pod="nova-kuttl-default/memcached-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.634198 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e387e78d-25ab-454b-9b66-d2cc13abe676-config-data\") pod \"memcached-0\" (UID: \"e387e78d-25ab-454b-9b66-d2cc13abe676\") " pod="nova-kuttl-default/memcached-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.634286 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e387e78d-25ab-454b-9b66-d2cc13abe676-kolla-config\") pod \"memcached-0\" (UID: \"e387e78d-25ab-454b-9b66-d2cc13abe676\") " pod="nova-kuttl-default/memcached-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.634315 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vfjc\" (UniqueName: \"kubernetes.io/projected/e387e78d-25ab-454b-9b66-d2cc13abe676-kube-api-access-4vfjc\") pod \"memcached-0\" (UID: \"e387e78d-25ab-454b-9b66-d2cc13abe676\") " pod="nova-kuttl-default/memcached-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.635024 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e387e78d-25ab-454b-9b66-d2cc13abe676-config-data\") pod \"memcached-0\" (UID: \"e387e78d-25ab-454b-9b66-d2cc13abe676\") " pod="nova-kuttl-default/memcached-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.635497 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e387e78d-25ab-454b-9b66-d2cc13abe676-kolla-config\") pod \"memcached-0\" (UID: \"e387e78d-25ab-454b-9b66-d2cc13abe676\") " pod="nova-kuttl-default/memcached-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.653296 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vfjc\" (UniqueName: \"kubernetes.io/projected/e387e78d-25ab-454b-9b66-d2cc13abe676-kube-api-access-4vfjc\") pod \"memcached-0\" (UID: \"e387e78d-25ab-454b-9b66-d2cc13abe676\") " pod="nova-kuttl-default/memcached-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.762991 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.775526 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/memcached-0" Jan 30 17:11:17 crc kubenswrapper[4875]: I0130 17:11:17.990382 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.252324 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 30 17:11:18 crc kubenswrapper[4875]: W0130 17:11:18.260144 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2651f38f_c3ae_4970_ab34_7b9540d5aa24.slice/crio-3a07ad54c37c328e77f982e4bb7aaef0adda068e56379f21e9d7de2e4bf17320 WatchSource:0}: Error finding container 3a07ad54c37c328e77f982e4bb7aaef0adda068e56379f21e9d7de2e4bf17320: Status 404 returned error can't find the container with id 3a07ad54c37c328e77f982e4bb7aaef0adda068e56379f21e9d7de2e4bf17320 Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.526190 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"e387e78d-25ab-454b-9b66-d2cc13abe676","Type":"ContainerStarted","Data":"9c80c1d5a7c96d3bf1137bccf11a2c56d1a5287a620cb40cc341729a4a743c3c"} Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.527966 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"2651f38f-c3ae-4970-ab34-7b9540d5aa24","Type":"ContainerStarted","Data":"3a07ad54c37c328e77f982e4bb7aaef0adda068e56379f21e9d7de2e4bf17320"} Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.636716 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.637998 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.644699 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-cell1-svc" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.646090 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-cell1-dockercfg-n8mjx" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.648633 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-scripts" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.648686 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-config-data" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.650907 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.753350 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp7lc\" (UniqueName: \"kubernetes.io/projected/83732f39-75fd-4817-be96-f954dcc5fd96-kube-api-access-jp7lc\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.753433 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/83732f39-75fd-4817-be96-f954dcc5fd96-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.753498 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/83732f39-75fd-4817-be96-f954dcc5fd96-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.753526 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83732f39-75fd-4817-be96-f954dcc5fd96-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.753565 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/83732f39-75fd-4817-be96-f954dcc5fd96-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.753604 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/83732f39-75fd-4817-be96-f954dcc5fd96-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.753669 4875 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7f479643-29fe-474f-9c62-3744316e0b08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f479643-29fe-474f-9c62-3744316e0b08\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.753691 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83732f39-75fd-4817-be96-f954dcc5fd96-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.855496 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/83732f39-75fd-4817-be96-f954dcc5fd96-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.855556 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83732f39-75fd-4817-be96-f954dcc5fd96-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.855605 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/83732f39-75fd-4817-be96-f954dcc5fd96-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.855622 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/83732f39-75fd-4817-be96-f954dcc5fd96-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.855668 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7f479643-29fe-474f-9c62-3744316e0b08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f479643-29fe-474f-9c62-3744316e0b08\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.855685 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83732f39-75fd-4817-be96-f954dcc5fd96-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.855716 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp7lc\" (UniqueName: \"kubernetes.io/projected/83732f39-75fd-4817-be96-f954dcc5fd96-kube-api-access-jp7lc\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " 
pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.855742 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/83732f39-75fd-4817-be96-f954dcc5fd96-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.856169 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/83732f39-75fd-4817-be96-f954dcc5fd96-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.857124 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/83732f39-75fd-4817-be96-f954dcc5fd96-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.857276 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/83732f39-75fd-4817-be96-f954dcc5fd96-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.857540 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83732f39-75fd-4817-be96-f954dcc5fd96-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.867382 4875 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.867439 4875 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7f479643-29fe-474f-9c62-3744316e0b08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f479643-29fe-474f-9c62-3744316e0b08\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/556e7ae061c261031e9d4c8557f9e7ac02670b7bec4f8a1dee987dd8f6677064/globalmount\"" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.876686 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/83732f39-75fd-4817-be96-f954dcc5fd96-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.877244 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83732f39-75fd-4817-be96-f954dcc5fd96-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.877886 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp7lc\" (UniqueName: \"kubernetes.io/projected/83732f39-75fd-4817-be96-f954dcc5fd96-kube-api-access-jp7lc\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.911230 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7f479643-29fe-474f-9c62-3744316e0b08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f479643-29fe-474f-9c62-3744316e0b08\") pod \"openstack-cell1-galera-0\" (UID: \"83732f39-75fd-4817-be96-f954dcc5fd96\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:18 crc kubenswrapper[4875]: I0130 17:11:18.961032 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:19 crc kubenswrapper[4875]: I0130 17:11:19.381340 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 30 17:11:19 crc kubenswrapper[4875]: W0130 17:11:19.396774 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83732f39_75fd_4817_be96_f954dcc5fd96.slice/crio-3092a7c6aa960f5cd0b80ad4f89f4d48d8b6c7083214a5a6d710c97eef537dec WatchSource:0}: Error finding container 3092a7c6aa960f5cd0b80ad4f89f4d48d8b6c7083214a5a6d710c97eef537dec: Status 404 returned error can't find the container with id 3092a7c6aa960f5cd0b80ad4f89f4d48d8b6c7083214a5a6d710c97eef537dec Jan 30 17:11:19 crc kubenswrapper[4875]: I0130 17:11:19.536775 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"83732f39-75fd-4817-be96-f954dcc5fd96","Type":"ContainerStarted","Data":"3092a7c6aa960f5cd0b80ad4f89f4d48d8b6c7083214a5a6d710c97eef537dec"} Jan 30 17:11:26 crc kubenswrapper[4875]: I0130 17:11:26.287930 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:11:26 crc kubenswrapper[4875]: I0130 17:11:26.289504 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:11:28 crc kubenswrapper[4875]: I0130 17:11:28.641759 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"e387e78d-25ab-454b-9b66-d2cc13abe676","Type":"ContainerStarted","Data":"f4159046b9514af233dfc19756f0ce651f056af8607ff58c7fa67a971e3ea62b"} Jan 30 17:11:28 crc kubenswrapper[4875]: I0130 17:11:28.642551 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/memcached-0" Jan 30 17:11:28 crc kubenswrapper[4875]: I0130 17:11:28.643448 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"2651f38f-c3ae-4970-ab34-7b9540d5aa24","Type":"ContainerStarted","Data":"93b1f41bea1fb2752c2b3b413a1370664e495e6f7129a4079132410900d98b24"} Jan 30 17:11:28 crc kubenswrapper[4875]: I0130 17:11:28.645338 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"83732f39-75fd-4817-be96-f954dcc5fd96","Type":"ContainerStarted","Data":"f16591f5219fdc9a7453c5f8b2c106a51566b584c40eb293f73e2b961b996974"} Jan 30 17:11:28 crc kubenswrapper[4875]: I0130 17:11:28.661581 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/memcached-0" podStartSLOduration=1.380270757 podStartE2EDuration="11.661552222s" podCreationTimestamp="2026-01-30 17:11:17 +0000 UTC" firstStartedPulling="2026-01-30 17:11:17.996749319 +0000 UTC m=+888.544112702" lastFinishedPulling="2026-01-30 17:11:28.278030784 +0000 UTC m=+898.825394167" observedRunningTime="2026-01-30 17:11:28.66119507 +0000 UTC m=+899.208558453" watchObservedRunningTime="2026-01-30 17:11:28.661552222 +0000 
UTC m=+899.208915635" Jan 30 17:11:29 crc kubenswrapper[4875]: I0130 17:11:29.654160 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"b6ee4eec-358c-45f7-9b1a-143de69b2929","Type":"ContainerStarted","Data":"db19b1b372af10769f691a279cf83d3ba834652a8b77d5ecefd94d67b753ab5a"} Jan 30 17:11:29 crc kubenswrapper[4875]: I0130 17:11:29.655444 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"2d4b13af-d4ec-458c-b3a9-e060171110f6","Type":"ContainerStarted","Data":"eb30f81307a858e52ea1cfe34c51f09ae1c873df6e9b2455bf69d6a47ae050c9"} Jan 30 17:11:29 crc kubenswrapper[4875]: I0130 17:11:29.656859 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"e75a0606-ea82-4ab9-8245-feb3105a23ba","Type":"ContainerStarted","Data":"989b0195b567cec8a307efba99699345219f928641e0ab411fb8c13c86651c44"} Jan 30 17:11:32 crc kubenswrapper[4875]: I0130 17:11:32.698893 4875 generic.go:334] "Generic (PLEG): container finished" podID="2651f38f-c3ae-4970-ab34-7b9540d5aa24" containerID="93b1f41bea1fb2752c2b3b413a1370664e495e6f7129a4079132410900d98b24" exitCode=0 Jan 30 17:11:32 crc kubenswrapper[4875]: I0130 17:11:32.699124 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"2651f38f-c3ae-4970-ab34-7b9540d5aa24","Type":"ContainerDied","Data":"93b1f41bea1fb2752c2b3b413a1370664e495e6f7129a4079132410900d98b24"} Jan 30 17:11:32 crc kubenswrapper[4875]: I0130 17:11:32.701917 4875 generic.go:334] "Generic (PLEG): container finished" podID="83732f39-75fd-4817-be96-f954dcc5fd96" containerID="f16591f5219fdc9a7453c5f8b2c106a51566b584c40eb293f73e2b961b996974" exitCode=0 Jan 30 17:11:32 crc kubenswrapper[4875]: I0130 17:11:32.701965 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"83732f39-75fd-4817-be96-f954dcc5fd96","Type":"ContainerDied","Data":"f16591f5219fdc9a7453c5f8b2c106a51566b584c40eb293f73e2b961b996974"} Jan 30 17:11:33 crc kubenswrapper[4875]: I0130 17:11:33.710198 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"2651f38f-c3ae-4970-ab34-7b9540d5aa24","Type":"ContainerStarted","Data":"d1454f2985da8d3592fa1f05fa8472c25be62659fa03f6415a995f8f7f582ce3"} Jan 30 17:11:33 crc kubenswrapper[4875]: I0130 17:11:33.712548 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"83732f39-75fd-4817-be96-f954dcc5fd96","Type":"ContainerStarted","Data":"2d467aa06c2c3ca5a3f5e0707e4f73fbb4cb6715681beb49aa9e5c0cd1b14917"} Jan 30 17:11:33 crc kubenswrapper[4875]: I0130 17:11:33.732877 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-galera-0" podStartSLOduration=7.704705538 podStartE2EDuration="17.732855689s" podCreationTimestamp="2026-01-30 17:11:16 +0000 UTC" firstStartedPulling="2026-01-30 17:11:18.263370299 +0000 UTC m=+888.810733682" lastFinishedPulling="2026-01-30 17:11:28.29152045 +0000 UTC m=+898.838883833" observedRunningTime="2026-01-30 17:11:33.730020371 +0000 UTC m=+904.277383754" watchObservedRunningTime="2026-01-30 17:11:33.732855689 +0000 UTC m=+904.280219072" Jan 30 17:11:33 crc kubenswrapper[4875]: I0130 17:11:33.754105 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
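The pod_startup_latency_tracker entries report two numbers: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). For memcached-0 that is 11.661552222s end-to-end minus a 10.281281465s pull, leaving 1.380270757s, exactly as logged. A self-contained Go check of that arithmetic, using the timestamps as printed above:

package main

import (
    "fmt"
    "time"
)

// mustParse parses the timestamp format the kubelet prints in these
// entries (the default Go time.Time string format).
func mustParse(s string) time.Time {
    t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    if err != nil {
        panic(err)
    }
    return t
}

func main() {
    created := mustParse("2026-01-30 17:11:17 +0000 UTC")
    firstPull := mustParse("2026-01-30 17:11:17.996749319 +0000 UTC")
    lastPull := mustParse("2026-01-30 17:11:28.278030784 +0000 UTC")
    running := mustParse("2026-01-30 17:11:28.661552222 +0000 UTC")

    e2e := running.Sub(created)          // 11.661552222s = podStartE2EDuration
    slo := e2e - lastPull.Sub(firstPull) // 1.380270757s = podStartSLOduration
    fmt.Println(e2e, slo)
}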
pod="nova-kuttl-default/openstack-cell1-galera-0" podStartSLOduration=7.874400043 podStartE2EDuration="16.754085523s" podCreationTimestamp="2026-01-30 17:11:17 +0000 UTC" firstStartedPulling="2026-01-30 17:11:19.398308493 +0000 UTC m=+889.945671876" lastFinishedPulling="2026-01-30 17:11:28.277993963 +0000 UTC m=+898.825357356" observedRunningTime="2026-01-30 17:11:33.750440657 +0000 UTC m=+904.297804060" watchObservedRunningTime="2026-01-30 17:11:33.754085523 +0000 UTC m=+904.301448906" Jan 30 17:11:37 crc kubenswrapper[4875]: I0130 17:11:37.763404 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:37 crc kubenswrapper[4875]: I0130 17:11:37.764448 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:37 crc kubenswrapper[4875]: I0130 17:11:37.777132 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/memcached-0" Jan 30 17:11:38 crc kubenswrapper[4875]: I0130 17:11:38.433606 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:38 crc kubenswrapper[4875]: I0130 17:11:38.800773 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-galera-0" Jan 30 17:11:38 crc kubenswrapper[4875]: I0130 17:11:38.961280 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:38 crc kubenswrapper[4875]: I0130 17:11:38.961338 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:41 crc kubenswrapper[4875]: I0130 17:11:41.232232 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:41 crc kubenswrapper[4875]: I0130 17:11:41.317309 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 30 17:11:46 crc kubenswrapper[4875]: I0130 17:11:46.212274 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-2hhm8"] Jan 30 17:11:46 crc kubenswrapper[4875]: I0130 17:11:46.213123 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-2hhm8" Jan 30 17:11:46 crc kubenswrapper[4875]: I0130 17:11:46.214734 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-mariadb-root-db-secret" Jan 30 17:11:46 crc kubenswrapper[4875]: I0130 17:11:46.224312 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-2hhm8"] Jan 30 17:11:46 crc kubenswrapper[4875]: I0130 17:11:46.320974 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfa8fed6-37a9-43b3-a358-2d4e06a89eb0-operator-scripts\") pod \"root-account-create-update-2hhm8\" (UID: \"cfa8fed6-37a9-43b3-a358-2d4e06a89eb0\") " pod="nova-kuttl-default/root-account-create-update-2hhm8" Jan 30 17:11:46 crc kubenswrapper[4875]: I0130 17:11:46.321035 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cqtq\" (UniqueName: \"kubernetes.io/projected/cfa8fed6-37a9-43b3-a358-2d4e06a89eb0-kube-api-access-2cqtq\") pod \"root-account-create-update-2hhm8\" (UID: \"cfa8fed6-37a9-43b3-a358-2d4e06a89eb0\") " pod="nova-kuttl-default/root-account-create-update-2hhm8" Jan 30 17:11:46 crc kubenswrapper[4875]: I0130 17:11:46.423073 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfa8fed6-37a9-43b3-a358-2d4e06a89eb0-operator-scripts\") pod \"root-account-create-update-2hhm8\" (UID: \"cfa8fed6-37a9-43b3-a358-2d4e06a89eb0\") " pod="nova-kuttl-default/root-account-create-update-2hhm8" Jan 30 17:11:46 crc kubenswrapper[4875]: I0130 17:11:46.423209 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cqtq\" (UniqueName: \"kubernetes.io/projected/cfa8fed6-37a9-43b3-a358-2d4e06a89eb0-kube-api-access-2cqtq\") pod \"root-account-create-update-2hhm8\" (UID: \"cfa8fed6-37a9-43b3-a358-2d4e06a89eb0\") " pod="nova-kuttl-default/root-account-create-update-2hhm8" Jan 30 17:11:46 crc kubenswrapper[4875]: I0130 17:11:46.424159 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfa8fed6-37a9-43b3-a358-2d4e06a89eb0-operator-scripts\") pod \"root-account-create-update-2hhm8\" (UID: \"cfa8fed6-37a9-43b3-a358-2d4e06a89eb0\") " pod="nova-kuttl-default/root-account-create-update-2hhm8" Jan 30 17:11:46 crc kubenswrapper[4875]: I0130 17:11:46.450395 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cqtq\" (UniqueName: \"kubernetes.io/projected/cfa8fed6-37a9-43b3-a358-2d4e06a89eb0-kube-api-access-2cqtq\") pod \"root-account-create-update-2hhm8\" (UID: \"cfa8fed6-37a9-43b3-a358-2d4e06a89eb0\") " pod="nova-kuttl-default/root-account-create-update-2hhm8" Jan 30 17:11:46 crc kubenswrapper[4875]: I0130 17:11:46.532946 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-2hhm8" Jan 30 17:11:47 crc kubenswrapper[4875]: W0130 17:11:47.018446 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfa8fed6_37a9_43b3_a358_2d4e06a89eb0.slice/crio-e208f376810f6c17d1b685479fbcc6c88feee47baf32242e9a90af73efc2c5db WatchSource:0}: Error finding container e208f376810f6c17d1b685479fbcc6c88feee47baf32242e9a90af73efc2c5db: Status 404 returned error can't find the container with id e208f376810f6c17d1b685479fbcc6c88feee47baf32242e9a90af73efc2c5db Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.021958 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-2hhm8"] Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.351172 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-create-sljmk"] Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.352390 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-sljmk" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.359489 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-sljmk"] Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.439224 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cacfe404-1454-466e-8036-b66d7b76ea37-operator-scripts\") pod \"keystone-db-create-sljmk\" (UID: \"cacfe404-1454-466e-8036-b66d7b76ea37\") " pod="nova-kuttl-default/keystone-db-create-sljmk" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.439314 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvcz4\" (UniqueName: \"kubernetes.io/projected/cacfe404-1454-466e-8036-b66d7b76ea37-kube-api-access-pvcz4\") pod \"keystone-db-create-sljmk\" (UID: \"cacfe404-1454-466e-8036-b66d7b76ea37\") " pod="nova-kuttl-default/keystone-db-create-sljmk" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.460549 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-3844-account-create-update-hgdr6"] Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.461841 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-3844-account-create-update-hgdr6" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.464133 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-db-secret" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.469540 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-3844-account-create-update-hgdr6"] Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.540549 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cacfe404-1454-466e-8036-b66d7b76ea37-operator-scripts\") pod \"keystone-db-create-sljmk\" (UID: \"cacfe404-1454-466e-8036-b66d7b76ea37\") " pod="nova-kuttl-default/keystone-db-create-sljmk" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.540619 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccb0c94f-c080-475d-b3e3-5c48b99f7c1f-operator-scripts\") pod \"keystone-3844-account-create-update-hgdr6\" (UID: \"ccb0c94f-c080-475d-b3e3-5c48b99f7c1f\") " pod="nova-kuttl-default/keystone-3844-account-create-update-hgdr6" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.540662 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvcz4\" (UniqueName: \"kubernetes.io/projected/cacfe404-1454-466e-8036-b66d7b76ea37-kube-api-access-pvcz4\") pod \"keystone-db-create-sljmk\" (UID: \"cacfe404-1454-466e-8036-b66d7b76ea37\") " pod="nova-kuttl-default/keystone-db-create-sljmk" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.540727 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnvj7\" (UniqueName: \"kubernetes.io/projected/ccb0c94f-c080-475d-b3e3-5c48b99f7c1f-kube-api-access-dnvj7\") pod \"keystone-3844-account-create-update-hgdr6\" (UID: \"ccb0c94f-c080-475d-b3e3-5c48b99f7c1f\") " pod="nova-kuttl-default/keystone-3844-account-create-update-hgdr6" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.541657 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cacfe404-1454-466e-8036-b66d7b76ea37-operator-scripts\") pod \"keystone-db-create-sljmk\" (UID: \"cacfe404-1454-466e-8036-b66d7b76ea37\") " pod="nova-kuttl-default/keystone-db-create-sljmk" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.566362 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvcz4\" (UniqueName: \"kubernetes.io/projected/cacfe404-1454-466e-8036-b66d7b76ea37-kube-api-access-pvcz4\") pod \"keystone-db-create-sljmk\" (UID: \"cacfe404-1454-466e-8036-b66d7b76ea37\") " pod="nova-kuttl-default/keystone-db-create-sljmk" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.642036 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnvj7\" (UniqueName: \"kubernetes.io/projected/ccb0c94f-c080-475d-b3e3-5c48b99f7c1f-kube-api-access-dnvj7\") pod \"keystone-3844-account-create-update-hgdr6\" (UID: \"ccb0c94f-c080-475d-b3e3-5c48b99f7c1f\") " pod="nova-kuttl-default/keystone-3844-account-create-update-hgdr6" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.642114 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ccb0c94f-c080-475d-b3e3-5c48b99f7c1f-operator-scripts\") pod \"keystone-3844-account-create-update-hgdr6\" (UID: \"ccb0c94f-c080-475d-b3e3-5c48b99f7c1f\") " pod="nova-kuttl-default/keystone-3844-account-create-update-hgdr6" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.642824 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccb0c94f-c080-475d-b3e3-5c48b99f7c1f-operator-scripts\") pod \"keystone-3844-account-create-update-hgdr6\" (UID: \"ccb0c94f-c080-475d-b3e3-5c48b99f7c1f\") " pod="nova-kuttl-default/keystone-3844-account-create-update-hgdr6" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.656102 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-create-57627"] Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.657217 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-57627" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.667263 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-sljmk" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.672406 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-57627"] Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.681414 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-ec5d-account-create-update-m7bv7"] Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.682479 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-ec5d-account-create-update-m7bv7" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.693399 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-ec5d-account-create-update-m7bv7"] Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.696133 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-db-secret" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.696565 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnvj7\" (UniqueName: \"kubernetes.io/projected/ccb0c94f-c080-475d-b3e3-5c48b99f7c1f-kube-api-access-dnvj7\") pod \"keystone-3844-account-create-update-hgdr6\" (UID: \"ccb0c94f-c080-475d-b3e3-5c48b99f7c1f\") " pod="nova-kuttl-default/keystone-3844-account-create-update-hgdr6" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.743915 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh2pw\" (UniqueName: \"kubernetes.io/projected/c19db90e-7888-492f-81aa-3109c80be25b-kube-api-access-jh2pw\") pod \"placement-db-create-57627\" (UID: \"c19db90e-7888-492f-81aa-3109c80be25b\") " pod="nova-kuttl-default/placement-db-create-57627" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.743968 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c19db90e-7888-492f-81aa-3109c80be25b-operator-scripts\") pod \"placement-db-create-57627\" (UID: \"c19db90e-7888-492f-81aa-3109c80be25b\") " pod="nova-kuttl-default/placement-db-create-57627" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.744014 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56e2e23d-0edb-4d09-b421-5bb12f185bdd-operator-scripts\") pod \"placement-ec5d-account-create-update-m7bv7\" (UID: \"56e2e23d-0edb-4d09-b421-5bb12f185bdd\") " pod="nova-kuttl-default/placement-ec5d-account-create-update-m7bv7" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.744163 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrn55\" (UniqueName: \"kubernetes.io/projected/56e2e23d-0edb-4d09-b421-5bb12f185bdd-kube-api-access-hrn55\") pod \"placement-ec5d-account-create-update-m7bv7\" (UID: \"56e2e23d-0edb-4d09-b421-5bb12f185bdd\") " pod="nova-kuttl-default/placement-ec5d-account-create-update-m7bv7" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.795534 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-3844-account-create-update-hgdr6" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.810319 4875 generic.go:334] "Generic (PLEG): container finished" podID="cfa8fed6-37a9-43b3-a358-2d4e06a89eb0" containerID="ac01eaeb76ee4b502893775cc7b4fd2a9c426eda11d37e1d3233a97604a95d90" exitCode=0 Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.810356 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-2hhm8" event={"ID":"cfa8fed6-37a9-43b3-a358-2d4e06a89eb0","Type":"ContainerDied","Data":"ac01eaeb76ee4b502893775cc7b4fd2a9c426eda11d37e1d3233a97604a95d90"} Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.810384 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-2hhm8" event={"ID":"cfa8fed6-37a9-43b3-a358-2d4e06a89eb0","Type":"ContainerStarted","Data":"e208f376810f6c17d1b685479fbcc6c88feee47baf32242e9a90af73efc2c5db"} Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.845769 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh2pw\" (UniqueName: \"kubernetes.io/projected/c19db90e-7888-492f-81aa-3109c80be25b-kube-api-access-jh2pw\") pod \"placement-db-create-57627\" (UID: \"c19db90e-7888-492f-81aa-3109c80be25b\") " pod="nova-kuttl-default/placement-db-create-57627" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.845812 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c19db90e-7888-492f-81aa-3109c80be25b-operator-scripts\") pod \"placement-db-create-57627\" (UID: \"c19db90e-7888-492f-81aa-3109c80be25b\") " pod="nova-kuttl-default/placement-db-create-57627" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.845855 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56e2e23d-0edb-4d09-b421-5bb12f185bdd-operator-scripts\") pod \"placement-ec5d-account-create-update-m7bv7\" (UID: \"56e2e23d-0edb-4d09-b421-5bb12f185bdd\") " pod="nova-kuttl-default/placement-ec5d-account-create-update-m7bv7" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.845892 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrn55\" (UniqueName: \"kubernetes.io/projected/56e2e23d-0edb-4d09-b421-5bb12f185bdd-kube-api-access-hrn55\") pod \"placement-ec5d-account-create-update-m7bv7\" (UID: \"56e2e23d-0edb-4d09-b421-5bb12f185bdd\") " pod="nova-kuttl-default/placement-ec5d-account-create-update-m7bv7" Jan 
30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.846901 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c19db90e-7888-492f-81aa-3109c80be25b-operator-scripts\") pod \"placement-db-create-57627\" (UID: \"c19db90e-7888-492f-81aa-3109c80be25b\") " pod="nova-kuttl-default/placement-db-create-57627" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.847340 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56e2e23d-0edb-4d09-b421-5bb12f185bdd-operator-scripts\") pod \"placement-ec5d-account-create-update-m7bv7\" (UID: \"56e2e23d-0edb-4d09-b421-5bb12f185bdd\") " pod="nova-kuttl-default/placement-ec5d-account-create-update-m7bv7" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.866464 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh2pw\" (UniqueName: \"kubernetes.io/projected/c19db90e-7888-492f-81aa-3109c80be25b-kube-api-access-jh2pw\") pod \"placement-db-create-57627\" (UID: \"c19db90e-7888-492f-81aa-3109c80be25b\") " pod="nova-kuttl-default/placement-db-create-57627" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.866937 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrn55\" (UniqueName: \"kubernetes.io/projected/56e2e23d-0edb-4d09-b421-5bb12f185bdd-kube-api-access-hrn55\") pod \"placement-ec5d-account-create-update-m7bv7\" (UID: \"56e2e23d-0edb-4d09-b421-5bb12f185bdd\") " pod="nova-kuttl-default/placement-ec5d-account-create-update-m7bv7" Jan 30 17:11:47 crc kubenswrapper[4875]: I0130 17:11:47.980507 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-57627" Jan 30 17:11:48 crc kubenswrapper[4875]: I0130 17:11:48.056055 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-ec5d-account-create-update-m7bv7" Jan 30 17:11:48 crc kubenswrapper[4875]: I0130 17:11:48.115228 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-sljmk"] Jan 30 17:11:48 crc kubenswrapper[4875]: I0130 17:11:48.245821 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-3844-account-create-update-hgdr6"] Jan 30 17:11:48 crc kubenswrapper[4875]: W0130 17:11:48.252844 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccb0c94f_c080_475d_b3e3_5c48b99f7c1f.slice/crio-9a9d427514972acad5a4de28072848c48a6b79f666275455b853b5fa2b026b2f WatchSource:0}: Error finding container 9a9d427514972acad5a4de28072848c48a6b79f666275455b853b5fa2b026b2f: Status 404 returned error can't find the container with id 9a9d427514972acad5a4de28072848c48a6b79f666275455b853b5fa2b026b2f Jan 30 17:11:48 crc kubenswrapper[4875]: W0130 17:11:48.389453 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc19db90e_7888_492f_81aa_3109c80be25b.slice/crio-b0321526068a56fd33c059b87a6b6767ee2dc280ac05793d78803a5058aa8898 WatchSource:0}: Error finding container b0321526068a56fd33c059b87a6b6767ee2dc280ac05793d78803a5058aa8898: Status 404 returned error can't find the container with id b0321526068a56fd33c059b87a6b6767ee2dc280ac05793d78803a5058aa8898 Jan 30 17:11:48 crc kubenswrapper[4875]: I0130 17:11:48.389731 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-57627"] Jan 30 17:11:48 crc kubenswrapper[4875]: I0130 17:11:48.518231 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-ec5d-account-create-update-m7bv7"] Jan 30 17:11:48 crc kubenswrapper[4875]: W0130 17:11:48.547392 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56e2e23d_0edb_4d09_b421_5bb12f185bdd.slice/crio-3eaa14e800b28d92e1a2852bfb869d8dc91e2c2bf72751e2e150cc00c8630f4e WatchSource:0}: Error finding container 3eaa14e800b28d92e1a2852bfb869d8dc91e2c2bf72751e2e150cc00c8630f4e: Status 404 returned error can't find the container with id 3eaa14e800b28d92e1a2852bfb869d8dc91e2c2bf72751e2e150cc00c8630f4e Jan 30 17:11:48 crc kubenswrapper[4875]: I0130 17:11:48.818351 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-57627" event={"ID":"c19db90e-7888-492f-81aa-3109c80be25b","Type":"ContainerStarted","Data":"7ccaeca0120987ce77a158bccc5c4d82c8df6516bc785d75df115c81e3a67fa6"} Jan 30 17:11:48 crc kubenswrapper[4875]: I0130 17:11:48.818393 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-57627" event={"ID":"c19db90e-7888-492f-81aa-3109c80be25b","Type":"ContainerStarted","Data":"b0321526068a56fd33c059b87a6b6767ee2dc280ac05793d78803a5058aa8898"} Jan 30 17:11:48 crc kubenswrapper[4875]: I0130 17:11:48.819792 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-3844-account-create-update-hgdr6" event={"ID":"ccb0c94f-c080-475d-b3e3-5c48b99f7c1f","Type":"ContainerStarted","Data":"0022c23ed2e2145d08ef28bf4670ca3497acdab4af3dff7c0d3899d5847337ad"} Jan 30 17:11:48 crc kubenswrapper[4875]: I0130 17:11:48.819841 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/keystone-3844-account-create-update-hgdr6" event={"ID":"ccb0c94f-c080-475d-b3e3-5c48b99f7c1f","Type":"ContainerStarted","Data":"9a9d427514972acad5a4de28072848c48a6b79f666275455b853b5fa2b026b2f"} Jan 30 17:11:48 crc kubenswrapper[4875]: I0130 17:11:48.821432 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-ec5d-account-create-update-m7bv7" event={"ID":"56e2e23d-0edb-4d09-b421-5bb12f185bdd","Type":"ContainerStarted","Data":"6612ce0639e40a40e560924e4d907833dffd286927eddf189ee1f195411aef45"} Jan 30 17:11:48 crc kubenswrapper[4875]: I0130 17:11:48.821492 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-ec5d-account-create-update-m7bv7" event={"ID":"56e2e23d-0edb-4d09-b421-5bb12f185bdd","Type":"ContainerStarted","Data":"3eaa14e800b28d92e1a2852bfb869d8dc91e2c2bf72751e2e150cc00c8630f4e"} Jan 30 17:11:48 crc kubenswrapper[4875]: I0130 17:11:48.823075 4875 generic.go:334] "Generic (PLEG): container finished" podID="cacfe404-1454-466e-8036-b66d7b76ea37" containerID="cfb3403bee90a75d4b11236e55da52f19abc361193aff4f5c329e9c54dca4e13" exitCode=0 Jan 30 17:11:48 crc kubenswrapper[4875]: I0130 17:11:48.823116 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-sljmk" event={"ID":"cacfe404-1454-466e-8036-b66d7b76ea37","Type":"ContainerDied","Data":"cfb3403bee90a75d4b11236e55da52f19abc361193aff4f5c329e9c54dca4e13"} Jan 30 17:11:48 crc kubenswrapper[4875]: I0130 17:11:48.823146 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-sljmk" event={"ID":"cacfe404-1454-466e-8036-b66d7b76ea37","Type":"ContainerStarted","Data":"c047daf73ea028dd25236ddbe74e2924adc28dfeaeb57f108388dcebc026ec67"} Jan 30 17:11:48 crc kubenswrapper[4875]: I0130 17:11:48.839849 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-db-create-57627" podStartSLOduration=1.8398250360000001 podStartE2EDuration="1.839825036s" podCreationTimestamp="2026-01-30 17:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:11:48.833831799 +0000 UTC m=+919.381195182" watchObservedRunningTime="2026-01-30 17:11:48.839825036 +0000 UTC m=+919.387188419" Jan 30 17:11:48 crc kubenswrapper[4875]: I0130 17:11:48.850489 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-ec5d-account-create-update-m7bv7" podStartSLOduration=1.8504704840000001 podStartE2EDuration="1.850470484s" podCreationTimestamp="2026-01-30 17:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:11:48.84860818 +0000 UTC m=+919.395971563" watchObservedRunningTime="2026-01-30 17:11:48.850470484 +0000 UTC m=+919.397833867" Jan 30 17:11:48 crc kubenswrapper[4875]: I0130 17:11:48.860656 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-3844-account-create-update-hgdr6" podStartSLOduration=1.860641266 podStartE2EDuration="1.860641266s" podCreationTimestamp="2026-01-30 17:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:11:48.858182101 +0000 UTC m=+919.405545484" watchObservedRunningTime="2026-01-30 17:11:48.860641266 +0000 UTC 
m=+919.408004639" Jan 30 17:11:49 crc kubenswrapper[4875]: I0130 17:11:49.099371 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-2hhm8" Jan 30 17:11:49 crc kubenswrapper[4875]: I0130 17:11:49.160027 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfa8fed6-37a9-43b3-a358-2d4e06a89eb0-operator-scripts\") pod \"cfa8fed6-37a9-43b3-a358-2d4e06a89eb0\" (UID: \"cfa8fed6-37a9-43b3-a358-2d4e06a89eb0\") " Jan 30 17:11:49 crc kubenswrapper[4875]: I0130 17:11:49.160425 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cqtq\" (UniqueName: \"kubernetes.io/projected/cfa8fed6-37a9-43b3-a358-2d4e06a89eb0-kube-api-access-2cqtq\") pod \"cfa8fed6-37a9-43b3-a358-2d4e06a89eb0\" (UID: \"cfa8fed6-37a9-43b3-a358-2d4e06a89eb0\") " Jan 30 17:11:49 crc kubenswrapper[4875]: I0130 17:11:49.160546 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfa8fed6-37a9-43b3-a358-2d4e06a89eb0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cfa8fed6-37a9-43b3-a358-2d4e06a89eb0" (UID: "cfa8fed6-37a9-43b3-a358-2d4e06a89eb0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:11:49 crc kubenswrapper[4875]: I0130 17:11:49.160970 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfa8fed6-37a9-43b3-a358-2d4e06a89eb0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:49 crc kubenswrapper[4875]: I0130 17:11:49.168813 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfa8fed6-37a9-43b3-a358-2d4e06a89eb0-kube-api-access-2cqtq" (OuterVolumeSpecName: "kube-api-access-2cqtq") pod "cfa8fed6-37a9-43b3-a358-2d4e06a89eb0" (UID: "cfa8fed6-37a9-43b3-a358-2d4e06a89eb0"). InnerVolumeSpecName "kube-api-access-2cqtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:11:49 crc kubenswrapper[4875]: I0130 17:11:49.262402 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cqtq\" (UniqueName: \"kubernetes.io/projected/cfa8fed6-37a9-43b3-a358-2d4e06a89eb0-kube-api-access-2cqtq\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:49 crc kubenswrapper[4875]: I0130 17:11:49.831016 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-2hhm8" event={"ID":"cfa8fed6-37a9-43b3-a358-2d4e06a89eb0","Type":"ContainerDied","Data":"e208f376810f6c17d1b685479fbcc6c88feee47baf32242e9a90af73efc2c5db"} Jan 30 17:11:49 crc kubenswrapper[4875]: I0130 17:11:49.831296 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e208f376810f6c17d1b685479fbcc6c88feee47baf32242e9a90af73efc2c5db" Jan 30 17:11:49 crc kubenswrapper[4875]: I0130 17:11:49.831042 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-2hhm8" Jan 30 17:11:49 crc kubenswrapper[4875]: I0130 17:11:49.832405 4875 generic.go:334] "Generic (PLEG): container finished" podID="56e2e23d-0edb-4d09-b421-5bb12f185bdd" containerID="6612ce0639e40a40e560924e4d907833dffd286927eddf189ee1f195411aef45" exitCode=0 Jan 30 17:11:49 crc kubenswrapper[4875]: I0130 17:11:49.832467 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-ec5d-account-create-update-m7bv7" event={"ID":"56e2e23d-0edb-4d09-b421-5bb12f185bdd","Type":"ContainerDied","Data":"6612ce0639e40a40e560924e4d907833dffd286927eddf189ee1f195411aef45"} Jan 30 17:11:49 crc kubenswrapper[4875]: I0130 17:11:49.833825 4875 generic.go:334] "Generic (PLEG): container finished" podID="c19db90e-7888-492f-81aa-3109c80be25b" containerID="7ccaeca0120987ce77a158bccc5c4d82c8df6516bc785d75df115c81e3a67fa6" exitCode=0 Jan 30 17:11:49 crc kubenswrapper[4875]: I0130 17:11:49.833865 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-57627" event={"ID":"c19db90e-7888-492f-81aa-3109c80be25b","Type":"ContainerDied","Data":"7ccaeca0120987ce77a158bccc5c4d82c8df6516bc785d75df115c81e3a67fa6"} Jan 30 17:11:49 crc kubenswrapper[4875]: I0130 17:11:49.836077 4875 generic.go:334] "Generic (PLEG): container finished" podID="ccb0c94f-c080-475d-b3e3-5c48b99f7c1f" containerID="0022c23ed2e2145d08ef28bf4670ca3497acdab4af3dff7c0d3899d5847337ad" exitCode=0 Jan 30 17:11:49 crc kubenswrapper[4875]: I0130 17:11:49.836133 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-3844-account-create-update-hgdr6" event={"ID":"ccb0c94f-c080-475d-b3e3-5c48b99f7c1f","Type":"ContainerDied","Data":"0022c23ed2e2145d08ef28bf4670ca3497acdab4af3dff7c0d3899d5847337ad"} Jan 30 17:11:50 crc kubenswrapper[4875]: I0130 17:11:50.166541 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-sljmk" Jan 30 17:11:50 crc kubenswrapper[4875]: I0130 17:11:50.278140 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvcz4\" (UniqueName: \"kubernetes.io/projected/cacfe404-1454-466e-8036-b66d7b76ea37-kube-api-access-pvcz4\") pod \"cacfe404-1454-466e-8036-b66d7b76ea37\" (UID: \"cacfe404-1454-466e-8036-b66d7b76ea37\") " Jan 30 17:11:50 crc kubenswrapper[4875]: I0130 17:11:50.278231 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cacfe404-1454-466e-8036-b66d7b76ea37-operator-scripts\") pod \"cacfe404-1454-466e-8036-b66d7b76ea37\" (UID: \"cacfe404-1454-466e-8036-b66d7b76ea37\") " Jan 30 17:11:50 crc kubenswrapper[4875]: I0130 17:11:50.278969 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cacfe404-1454-466e-8036-b66d7b76ea37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cacfe404-1454-466e-8036-b66d7b76ea37" (UID: "cacfe404-1454-466e-8036-b66d7b76ea37"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:11:50 crc kubenswrapper[4875]: I0130 17:11:50.281725 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cacfe404-1454-466e-8036-b66d7b76ea37-kube-api-access-pvcz4" (OuterVolumeSpecName: "kube-api-access-pvcz4") pod "cacfe404-1454-466e-8036-b66d7b76ea37" (UID: "cacfe404-1454-466e-8036-b66d7b76ea37"). InnerVolumeSpecName "kube-api-access-pvcz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:11:50 crc kubenswrapper[4875]: I0130 17:11:50.380494 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvcz4\" (UniqueName: \"kubernetes.io/projected/cacfe404-1454-466e-8036-b66d7b76ea37-kube-api-access-pvcz4\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:50 crc kubenswrapper[4875]: I0130 17:11:50.380536 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cacfe404-1454-466e-8036-b66d7b76ea37-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:50 crc kubenswrapper[4875]: I0130 17:11:50.848793 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-sljmk" Jan 30 17:11:50 crc kubenswrapper[4875]: I0130 17:11:50.848830 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-sljmk" event={"ID":"cacfe404-1454-466e-8036-b66d7b76ea37","Type":"ContainerDied","Data":"c047daf73ea028dd25236ddbe74e2924adc28dfeaeb57f108388dcebc026ec67"} Jan 30 17:11:50 crc kubenswrapper[4875]: I0130 17:11:50.848868 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c047daf73ea028dd25236ddbe74e2924adc28dfeaeb57f108388dcebc026ec67" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.158576 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-ec5d-account-create-update-m7bv7" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.300178 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrn55\" (UniqueName: \"kubernetes.io/projected/56e2e23d-0edb-4d09-b421-5bb12f185bdd-kube-api-access-hrn55\") pod \"56e2e23d-0edb-4d09-b421-5bb12f185bdd\" (UID: \"56e2e23d-0edb-4d09-b421-5bb12f185bdd\") " Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.300227 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56e2e23d-0edb-4d09-b421-5bb12f185bdd-operator-scripts\") pod \"56e2e23d-0edb-4d09-b421-5bb12f185bdd\" (UID: \"56e2e23d-0edb-4d09-b421-5bb12f185bdd\") " Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.302125 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56e2e23d-0edb-4d09-b421-5bb12f185bdd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56e2e23d-0edb-4d09-b421-5bb12f185bdd" (UID: "56e2e23d-0edb-4d09-b421-5bb12f185bdd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.305945 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56e2e23d-0edb-4d09-b421-5bb12f185bdd-kube-api-access-hrn55" (OuterVolumeSpecName: "kube-api-access-hrn55") pod "56e2e23d-0edb-4d09-b421-5bb12f185bdd" (UID: "56e2e23d-0edb-4d09-b421-5bb12f185bdd"). 
InnerVolumeSpecName "kube-api-access-hrn55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.355352 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-57627" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.361371 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-3844-account-create-update-hgdr6" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.401508 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrn55\" (UniqueName: \"kubernetes.io/projected/56e2e23d-0edb-4d09-b421-5bb12f185bdd-kube-api-access-hrn55\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.401534 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56e2e23d-0edb-4d09-b421-5bb12f185bdd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.502095 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnvj7\" (UniqueName: \"kubernetes.io/projected/ccb0c94f-c080-475d-b3e3-5c48b99f7c1f-kube-api-access-dnvj7\") pod \"ccb0c94f-c080-475d-b3e3-5c48b99f7c1f\" (UID: \"ccb0c94f-c080-475d-b3e3-5c48b99f7c1f\") " Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.502156 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh2pw\" (UniqueName: \"kubernetes.io/projected/c19db90e-7888-492f-81aa-3109c80be25b-kube-api-access-jh2pw\") pod \"c19db90e-7888-492f-81aa-3109c80be25b\" (UID: \"c19db90e-7888-492f-81aa-3109c80be25b\") " Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.502202 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccb0c94f-c080-475d-b3e3-5c48b99f7c1f-operator-scripts\") pod \"ccb0c94f-c080-475d-b3e3-5c48b99f7c1f\" (UID: \"ccb0c94f-c080-475d-b3e3-5c48b99f7c1f\") " Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.502259 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c19db90e-7888-492f-81aa-3109c80be25b-operator-scripts\") pod \"c19db90e-7888-492f-81aa-3109c80be25b\" (UID: \"c19db90e-7888-492f-81aa-3109c80be25b\") " Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.503370 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c19db90e-7888-492f-81aa-3109c80be25b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c19db90e-7888-492f-81aa-3109c80be25b" (UID: "c19db90e-7888-492f-81aa-3109c80be25b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.503738 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccb0c94f-c080-475d-b3e3-5c48b99f7c1f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ccb0c94f-c080-475d-b3e3-5c48b99f7c1f" (UID: "ccb0c94f-c080-475d-b3e3-5c48b99f7c1f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.506516 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c19db90e-7888-492f-81aa-3109c80be25b-kube-api-access-jh2pw" (OuterVolumeSpecName: "kube-api-access-jh2pw") pod "c19db90e-7888-492f-81aa-3109c80be25b" (UID: "c19db90e-7888-492f-81aa-3109c80be25b"). InnerVolumeSpecName "kube-api-access-jh2pw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.506576 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccb0c94f-c080-475d-b3e3-5c48b99f7c1f-kube-api-access-dnvj7" (OuterVolumeSpecName: "kube-api-access-dnvj7") pod "ccb0c94f-c080-475d-b3e3-5c48b99f7c1f" (UID: "ccb0c94f-c080-475d-b3e3-5c48b99f7c1f"). InnerVolumeSpecName "kube-api-access-dnvj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.603879 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnvj7\" (UniqueName: \"kubernetes.io/projected/ccb0c94f-c080-475d-b3e3-5c48b99f7c1f-kube-api-access-dnvj7\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.603933 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh2pw\" (UniqueName: \"kubernetes.io/projected/c19db90e-7888-492f-81aa-3109c80be25b-kube-api-access-jh2pw\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.603945 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccb0c94f-c080-475d-b3e3-5c48b99f7c1f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.603954 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c19db90e-7888-492f-81aa-3109c80be25b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.858184 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-3844-account-create-update-hgdr6" event={"ID":"ccb0c94f-c080-475d-b3e3-5c48b99f7c1f","Type":"ContainerDied","Data":"9a9d427514972acad5a4de28072848c48a6b79f666275455b853b5fa2b026b2f"} Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.858226 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a9d427514972acad5a4de28072848c48a6b79f666275455b853b5fa2b026b2f" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.858565 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-3844-account-create-update-hgdr6" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.859749 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-ec5d-account-create-update-m7bv7" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.859749 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-ec5d-account-create-update-m7bv7" event={"ID":"56e2e23d-0edb-4d09-b421-5bb12f185bdd","Type":"ContainerDied","Data":"3eaa14e800b28d92e1a2852bfb869d8dc91e2c2bf72751e2e150cc00c8630f4e"} Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.859870 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eaa14e800b28d92e1a2852bfb869d8dc91e2c2bf72751e2e150cc00c8630f4e" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.861672 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-57627" event={"ID":"c19db90e-7888-492f-81aa-3109c80be25b","Type":"ContainerDied","Data":"b0321526068a56fd33c059b87a6b6767ee2dc280ac05793d78803a5058aa8898"} Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.861692 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0321526068a56fd33c059b87a6b6767ee2dc280ac05793d78803a5058aa8898" Jan 30 17:11:51 crc kubenswrapper[4875]: I0130 17:11:51.861726 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-57627" Jan 30 17:11:52 crc kubenswrapper[4875]: I0130 17:11:52.617001 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-2hhm8"] Jan 30 17:11:52 crc kubenswrapper[4875]: I0130 17:11:52.628159 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-2hhm8"] Jan 30 17:11:54 crc kubenswrapper[4875]: I0130 17:11:54.144131 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfa8fed6-37a9-43b3-a358-2d4e06a89eb0" path="/var/lib/kubelet/pods/cfa8fed6-37a9-43b3-a358-2d4e06a89eb0/volumes" Jan 30 17:11:56 crc kubenswrapper[4875]: I0130 17:11:56.288022 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:11:56 crc kubenswrapper[4875]: I0130 17:11:56.288307 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.597852 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-d79kf"] Jan 30 17:11:57 crc kubenswrapper[4875]: E0130 17:11:57.598412 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c19db90e-7888-492f-81aa-3109c80be25b" containerName="mariadb-database-create" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.598424 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="c19db90e-7888-492f-81aa-3109c80be25b" containerName="mariadb-database-create" Jan 30 17:11:57 crc kubenswrapper[4875]: E0130 17:11:57.598457 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cacfe404-1454-466e-8036-b66d7b76ea37" containerName="mariadb-database-create" Jan 30 17:11:57 crc 
kubenswrapper[4875]: I0130 17:11:57.598464 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="cacfe404-1454-466e-8036-b66d7b76ea37" containerName="mariadb-database-create" Jan 30 17:11:57 crc kubenswrapper[4875]: E0130 17:11:57.598479 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa8fed6-37a9-43b3-a358-2d4e06a89eb0" containerName="mariadb-account-create-update" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.598485 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa8fed6-37a9-43b3-a358-2d4e06a89eb0" containerName="mariadb-account-create-update" Jan 30 17:11:57 crc kubenswrapper[4875]: E0130 17:11:57.598495 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e2e23d-0edb-4d09-b421-5bb12f185bdd" containerName="mariadb-account-create-update" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.598501 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e2e23d-0edb-4d09-b421-5bb12f185bdd" containerName="mariadb-account-create-update" Jan 30 17:11:57 crc kubenswrapper[4875]: E0130 17:11:57.598514 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb0c94f-c080-475d-b3e3-5c48b99f7c1f" containerName="mariadb-account-create-update" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.598520 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb0c94f-c080-475d-b3e3-5c48b99f7c1f" containerName="mariadb-account-create-update" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.598668 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="cacfe404-1454-466e-8036-b66d7b76ea37" containerName="mariadb-database-create" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.598678 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb0c94f-c080-475d-b3e3-5c48b99f7c1f" containerName="mariadb-account-create-update" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.598685 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfa8fed6-37a9-43b3-a358-2d4e06a89eb0" containerName="mariadb-account-create-update" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.598698 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="c19db90e-7888-492f-81aa-3109c80be25b" containerName="mariadb-database-create" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.598710 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e2e23d-0edb-4d09-b421-5bb12f185bdd" containerName="mariadb-account-create-update" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.599170 4875 util.go:30] "No sandbox for pod can be found. 
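When root-account-create-update-d79kf is admitted, the CPU and memory managers first drop checkpoint entries left behind by the five job pods that just finished, so stale CPUSet and memory assignments cannot leak into the new pod's admission. A sketch of that cleanup under the assumption (mine) that the state is a map keyed by pod UID and container name:

package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState drops assignments whose pod is no longer active,
// mirroring the cpu_manager.go:410 / state_mem.go:107 pairs above.
// The map representation is an assumption for illustration.
func removeStaleState(assignments map[key]string, active map[string]bool) {
    for k := range assignments {
        if !active[k.podUID] {
            fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
                k.podUID, k.container)
            delete(assignments, k)
        }
    }
}

func main() {
    a := map[key]string{
        {"c19db90e-7888-492f-81aa-3109c80be25b", "mariadb-database-create"}: "cpuset:0-3",
    }
    removeStaleState(a, map[string]bool{}) // no active pods -> entry removed
    fmt.Println("remaining assignments:", len(a))
}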
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-d79kf" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.603621 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-cell1-mariadb-root-db-secret" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.611750 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-d79kf"] Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.698573 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a569857-e743-4fec-8bc5-63bdec8c8b0c-operator-scripts\") pod \"root-account-create-update-d79kf\" (UID: \"9a569857-e743-4fec-8bc5-63bdec8c8b0c\") " pod="nova-kuttl-default/root-account-create-update-d79kf" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.698705 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nxj2\" (UniqueName: \"kubernetes.io/projected/9a569857-e743-4fec-8bc5-63bdec8c8b0c-kube-api-access-8nxj2\") pod \"root-account-create-update-d79kf\" (UID: \"9a569857-e743-4fec-8bc5-63bdec8c8b0c\") " pod="nova-kuttl-default/root-account-create-update-d79kf" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.800245 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a569857-e743-4fec-8bc5-63bdec8c8b0c-operator-scripts\") pod \"root-account-create-update-d79kf\" (UID: \"9a569857-e743-4fec-8bc5-63bdec8c8b0c\") " pod="nova-kuttl-default/root-account-create-update-d79kf" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.800377 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nxj2\" (UniqueName: \"kubernetes.io/projected/9a569857-e743-4fec-8bc5-63bdec8c8b0c-kube-api-access-8nxj2\") pod \"root-account-create-update-d79kf\" (UID: \"9a569857-e743-4fec-8bc5-63bdec8c8b0c\") " pod="nova-kuttl-default/root-account-create-update-d79kf" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.801003 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a569857-e743-4fec-8bc5-63bdec8c8b0c-operator-scripts\") pod \"root-account-create-update-d79kf\" (UID: \"9a569857-e743-4fec-8bc5-63bdec8c8b0c\") " pod="nova-kuttl-default/root-account-create-update-d79kf" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.821545 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nxj2\" (UniqueName: \"kubernetes.io/projected/9a569857-e743-4fec-8bc5-63bdec8c8b0c-kube-api-access-8nxj2\") pod \"root-account-create-update-d79kf\" (UID: \"9a569857-e743-4fec-8bc5-63bdec8c8b0c\") " pod="nova-kuttl-default/root-account-create-update-d79kf" Jan 30 17:11:57 crc kubenswrapper[4875]: I0130 17:11:57.915473 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-d79kf" Jan 30 17:11:59 crc kubenswrapper[4875]: I0130 17:11:59.134899 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-d79kf"] Jan 30 17:11:59 crc kubenswrapper[4875]: W0130 17:11:59.135695 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a569857_e743_4fec_8bc5_63bdec8c8b0c.slice/crio-68f1b93e14cd2046c8ce1011768d8c310edea3491398993c7e70be3c004b8912 WatchSource:0}: Error finding container 68f1b93e14cd2046c8ce1011768d8c310edea3491398993c7e70be3c004b8912: Status 404 returned error can't find the container with id 68f1b93e14cd2046c8ce1011768d8c310edea3491398993c7e70be3c004b8912 Jan 30 17:11:59 crc kubenswrapper[4875]: I0130 17:11:59.925283 4875 generic.go:334] "Generic (PLEG): container finished" podID="9a569857-e743-4fec-8bc5-63bdec8c8b0c" containerID="010a44139337623bf89ddd7ce1765ba779ece6332e73e2b5def9f6ad4ad53fe7" exitCode=0 Jan 30 17:11:59 crc kubenswrapper[4875]: I0130 17:11:59.925378 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-d79kf" event={"ID":"9a569857-e743-4fec-8bc5-63bdec8c8b0c","Type":"ContainerDied","Data":"010a44139337623bf89ddd7ce1765ba779ece6332e73e2b5def9f6ad4ad53fe7"} Jan 30 17:11:59 crc kubenswrapper[4875]: I0130 17:11:59.925630 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-d79kf" event={"ID":"9a569857-e743-4fec-8bc5-63bdec8c8b0c","Type":"ContainerStarted","Data":"68f1b93e14cd2046c8ce1011768d8c310edea3491398993c7e70be3c004b8912"} Jan 30 17:12:01 crc kubenswrapper[4875]: I0130 17:12:01.324299 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-d79kf" Jan 30 17:12:01 crc kubenswrapper[4875]: I0130 17:12:01.446722 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nxj2\" (UniqueName: \"kubernetes.io/projected/9a569857-e743-4fec-8bc5-63bdec8c8b0c-kube-api-access-8nxj2\") pod \"9a569857-e743-4fec-8bc5-63bdec8c8b0c\" (UID: \"9a569857-e743-4fec-8bc5-63bdec8c8b0c\") " Jan 30 17:12:01 crc kubenswrapper[4875]: I0130 17:12:01.447022 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a569857-e743-4fec-8bc5-63bdec8c8b0c-operator-scripts\") pod \"9a569857-e743-4fec-8bc5-63bdec8c8b0c\" (UID: \"9a569857-e743-4fec-8bc5-63bdec8c8b0c\") " Jan 30 17:12:01 crc kubenswrapper[4875]: I0130 17:12:01.447783 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a569857-e743-4fec-8bc5-63bdec8c8b0c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a569857-e743-4fec-8bc5-63bdec8c8b0c" (UID: "9a569857-e743-4fec-8bc5-63bdec8c8b0c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:12:01 crc kubenswrapper[4875]: I0130 17:12:01.453109 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a569857-e743-4fec-8bc5-63bdec8c8b0c-kube-api-access-8nxj2" (OuterVolumeSpecName: "kube-api-access-8nxj2") pod "9a569857-e743-4fec-8bc5-63bdec8c8b0c" (UID: "9a569857-e743-4fec-8bc5-63bdec8c8b0c"). InnerVolumeSpecName "kube-api-access-8nxj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:12:01 crc kubenswrapper[4875]: I0130 17:12:01.548562 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a569857-e743-4fec-8bc5-63bdec8c8b0c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:01 crc kubenswrapper[4875]: I0130 17:12:01.548617 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nxj2\" (UniqueName: \"kubernetes.io/projected/9a569857-e743-4fec-8bc5-63bdec8c8b0c-kube-api-access-8nxj2\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:01 crc kubenswrapper[4875]: I0130 17:12:01.939831 4875 generic.go:334] "Generic (PLEG): container finished" podID="b6ee4eec-358c-45f7-9b1a-143de69b2929" containerID="db19b1b372af10769f691a279cf83d3ba834652a8b77d5ecefd94d67b753ab5a" exitCode=0 Jan 30 17:12:01 crc kubenswrapper[4875]: I0130 17:12:01.939881 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"b6ee4eec-358c-45f7-9b1a-143de69b2929","Type":"ContainerDied","Data":"db19b1b372af10769f691a279cf83d3ba834652a8b77d5ecefd94d67b753ab5a"} Jan 30 17:12:01 crc kubenswrapper[4875]: I0130 17:12:01.941220 4875 generic.go:334] "Generic (PLEG): container finished" podID="2d4b13af-d4ec-458c-b3a9-e060171110f6" containerID="eb30f81307a858e52ea1cfe34c51f09ae1c873df6e9b2455bf69d6a47ae050c9" exitCode=0 Jan 30 17:12:01 crc kubenswrapper[4875]: I0130 17:12:01.941280 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"2d4b13af-d4ec-458c-b3a9-e060171110f6","Type":"ContainerDied","Data":"eb30f81307a858e52ea1cfe34c51f09ae1c873df6e9b2455bf69d6a47ae050c9"} Jan 30 17:12:01 crc kubenswrapper[4875]: I0130 17:12:01.943886 4875 generic.go:334] "Generic (PLEG): container finished" podID="e75a0606-ea82-4ab9-8245-feb3105a23ba" containerID="989b0195b567cec8a307efba99699345219f928641e0ab411fb8c13c86651c44" exitCode=0 Jan 30 17:12:01 crc kubenswrapper[4875]: I0130 17:12:01.943938 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"e75a0606-ea82-4ab9-8245-feb3105a23ba","Type":"ContainerDied","Data":"989b0195b567cec8a307efba99699345219f928641e0ab411fb8c13c86651c44"} Jan 30 17:12:01 crc kubenswrapper[4875]: I0130 17:12:01.947009 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-d79kf" event={"ID":"9a569857-e743-4fec-8bc5-63bdec8c8b0c","Type":"ContainerDied","Data":"68f1b93e14cd2046c8ce1011768d8c310edea3491398993c7e70be3c004b8912"} Jan 30 17:12:01 crc kubenswrapper[4875]: I0130 17:12:01.947056 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68f1b93e14cd2046c8ce1011768d8c310edea3491398993c7e70be3c004b8912" Jan 30 17:12:01 crc kubenswrapper[4875]: I0130 17:12:01.947120 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-d79kf" Jan 30 17:12:02 crc kubenswrapper[4875]: I0130 17:12:02.955534 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"e75a0606-ea82-4ab9-8245-feb3105a23ba","Type":"ContainerStarted","Data":"abcd5fbbeaf193a56d64a0481863591fcb66f2d444473803b597c5cd252bc3ce"} Jan 30 17:12:02 crc kubenswrapper[4875]: I0130 17:12:02.956043 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:12:02 crc kubenswrapper[4875]: I0130 17:12:02.958680 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"b6ee4eec-358c-45f7-9b1a-143de69b2929","Type":"ContainerStarted","Data":"8e0139c73f9ad124615224ae99c41e1b05bda700058605c370d72dabb0d64517"} Jan 30 17:12:02 crc kubenswrapper[4875]: I0130 17:12:02.959242 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:12:02 crc kubenswrapper[4875]: I0130 17:12:02.960763 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"2d4b13af-d4ec-458c-b3a9-e060171110f6","Type":"ContainerStarted","Data":"88e6dd4e98be864b72981dc5e06bcf2ac93875bc325e2b59e592c7f85907bef6"} Jan 30 17:12:02 crc kubenswrapper[4875]: I0130 17:12:02.961029 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:12:02 crc kubenswrapper[4875]: I0130 17:12:02.981311 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-server-0" podStartSLOduration=37.608145211 podStartE2EDuration="48.981292223s" podCreationTimestamp="2026-01-30 17:11:14 +0000 UTC" firstStartedPulling="2026-01-30 17:11:16.873503738 +0000 UTC m=+887.420867121" lastFinishedPulling="2026-01-30 17:11:28.24665072 +0000 UTC m=+898.794014133" observedRunningTime="2026-01-30 17:12:02.976043702 +0000 UTC m=+933.523407085" watchObservedRunningTime="2026-01-30 17:12:02.981292223 +0000 UTC m=+933.528655606" Jan 30 17:12:03 crc kubenswrapper[4875]: I0130 17:12:03.005449 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" podStartSLOduration=36.814063853 podStartE2EDuration="48.005432026s" podCreationTimestamp="2026-01-30 17:11:15 +0000 UTC" firstStartedPulling="2026-01-30 17:11:17.100469178 +0000 UTC m=+887.647832561" lastFinishedPulling="2026-01-30 17:11:28.291837351 +0000 UTC m=+898.839200734" observedRunningTime="2026-01-30 17:12:02.997456481 +0000 UTC m=+933.544819864" watchObservedRunningTime="2026-01-30 17:12:03.005432026 +0000 UTC m=+933.552795409" Jan 30 17:12:03 crc kubenswrapper[4875]: I0130 17:12:03.021552 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-cell1-server-0" podStartSLOduration=36.708057703 podStartE2EDuration="48.021534043s" podCreationTimestamp="2026-01-30 17:11:15 +0000 UTC" firstStartedPulling="2026-01-30 17:11:16.96471601 +0000 UTC m=+887.512079393" lastFinishedPulling="2026-01-30 17:11:28.27819235 +0000 UTC m=+898.825555733" observedRunningTime="2026-01-30 17:12:03.017607977 +0000 UTC m=+933.564971370" watchObservedRunningTime="2026-01-30 17:12:03.021534043 +0000 UTC m=+933.568897426" Jan 30 17:12:16 crc kubenswrapper[4875]: I0130 17:12:16.443828 4875 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-server-0" Jan 30 17:12:16 crc kubenswrapper[4875]: I0130 17:12:16.665458 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 30 17:12:16 crc kubenswrapper[4875]: I0130 17:12:16.708125 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.073141 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-sync-xcxvd"] Jan 30 17:12:17 crc kubenswrapper[4875]: E0130 17:12:17.073435 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a569857-e743-4fec-8bc5-63bdec8c8b0c" containerName="mariadb-account-create-update" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.073451 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a569857-e743-4fec-8bc5-63bdec8c8b0c" containerName="mariadb-account-create-update" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.073609 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a569857-e743-4fec-8bc5-63bdec8c8b0c" containerName="mariadb-account-create-update" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.074060 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-xcxvd" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.076419 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-8b6fj" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.076499 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.076536 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.076926 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.085934 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-xcxvd"] Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.183656 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2064320-5eaf-4bef-af21-eb2812fcbd4a-combined-ca-bundle\") pod \"keystone-db-sync-xcxvd\" (UID: \"b2064320-5eaf-4bef-af21-eb2812fcbd4a\") " pod="nova-kuttl-default/keystone-db-sync-xcxvd" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.183754 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg8pj\" (UniqueName: \"kubernetes.io/projected/b2064320-5eaf-4bef-af21-eb2812fcbd4a-kube-api-access-hg8pj\") pod \"keystone-db-sync-xcxvd\" (UID: \"b2064320-5eaf-4bef-af21-eb2812fcbd4a\") " pod="nova-kuttl-default/keystone-db-sync-xcxvd" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.183790 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2064320-5eaf-4bef-af21-eb2812fcbd4a-config-data\") pod \"keystone-db-sync-xcxvd\" (UID: \"b2064320-5eaf-4bef-af21-eb2812fcbd4a\") " pod="nova-kuttl-default/keystone-db-sync-xcxvd" Jan 
30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.285459 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2064320-5eaf-4bef-af21-eb2812fcbd4a-combined-ca-bundle\") pod \"keystone-db-sync-xcxvd\" (UID: \"b2064320-5eaf-4bef-af21-eb2812fcbd4a\") " pod="nova-kuttl-default/keystone-db-sync-xcxvd" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.285530 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg8pj\" (UniqueName: \"kubernetes.io/projected/b2064320-5eaf-4bef-af21-eb2812fcbd4a-kube-api-access-hg8pj\") pod \"keystone-db-sync-xcxvd\" (UID: \"b2064320-5eaf-4bef-af21-eb2812fcbd4a\") " pod="nova-kuttl-default/keystone-db-sync-xcxvd" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.285555 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2064320-5eaf-4bef-af21-eb2812fcbd4a-config-data\") pod \"keystone-db-sync-xcxvd\" (UID: \"b2064320-5eaf-4bef-af21-eb2812fcbd4a\") " pod="nova-kuttl-default/keystone-db-sync-xcxvd" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.291288 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2064320-5eaf-4bef-af21-eb2812fcbd4a-config-data\") pod \"keystone-db-sync-xcxvd\" (UID: \"b2064320-5eaf-4bef-af21-eb2812fcbd4a\") " pod="nova-kuttl-default/keystone-db-sync-xcxvd" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.294884 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2064320-5eaf-4bef-af21-eb2812fcbd4a-combined-ca-bundle\") pod \"keystone-db-sync-xcxvd\" (UID: \"b2064320-5eaf-4bef-af21-eb2812fcbd4a\") " pod="nova-kuttl-default/keystone-db-sync-xcxvd" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.307328 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg8pj\" (UniqueName: \"kubernetes.io/projected/b2064320-5eaf-4bef-af21-eb2812fcbd4a-kube-api-access-hg8pj\") pod \"keystone-db-sync-xcxvd\" (UID: \"b2064320-5eaf-4bef-af21-eb2812fcbd4a\") " pod="nova-kuttl-default/keystone-db-sync-xcxvd" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.391357 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-xcxvd" Jan 30 17:12:17 crc kubenswrapper[4875]: I0130 17:12:17.858809 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-xcxvd"] Jan 30 17:12:18 crc kubenswrapper[4875]: I0130 17:12:18.086313 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-xcxvd" event={"ID":"b2064320-5eaf-4bef-af21-eb2812fcbd4a","Type":"ContainerStarted","Data":"b460fa8e6a3a34bfe3ecaf50d9f05b9d6e3448ae31b3ad0b8f64ad5736bc8d04"} Jan 30 17:12:26 crc kubenswrapper[4875]: I0130 17:12:26.287765 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:12:26 crc kubenswrapper[4875]: I0130 17:12:26.288281 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:12:26 crc kubenswrapper[4875]: I0130 17:12:26.288320 4875 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 17:12:26 crc kubenswrapper[4875]: I0130 17:12:26.288903 4875 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed42a4c14dffd4d7e8ff0992005f668baba6e088536dd037290ec2423738d85a"} pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:12:26 crc kubenswrapper[4875]: I0130 17:12:26.288955 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" containerID="cri-o://ed42a4c14dffd4d7e8ff0992005f668baba6e088536dd037290ec2423738d85a" gracePeriod=600 Jan 30 17:12:27 crc kubenswrapper[4875]: I0130 17:12:27.151945 4875 generic.go:334] "Generic (PLEG): container finished" podID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerID="ed42a4c14dffd4d7e8ff0992005f668baba6e088536dd037290ec2423738d85a" exitCode=0 Jan 30 17:12:27 crc kubenswrapper[4875]: I0130 17:12:27.152050 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerDied","Data":"ed42a4c14dffd4d7e8ff0992005f668baba6e088536dd037290ec2423738d85a"} Jan 30 17:12:27 crc kubenswrapper[4875]: I0130 17:12:27.152977 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerStarted","Data":"6514542be49997aad4594ad0a6547ac470439752a0efaf44fa7c391eb010bcf6"} Jan 30 17:12:27 crc kubenswrapper[4875]: I0130 17:12:27.153015 4875 scope.go:117] "RemoveContainer" containerID="44cbbe2347c99f305a77309b497f459a3e30dcbc1e853b9af4c1697fcc292f86" Jan 30 17:12:27 crc kubenswrapper[4875]: I0130 17:12:27.154580 4875 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="nova-kuttl-default/keystone-db-sync-xcxvd" event={"ID":"b2064320-5eaf-4bef-af21-eb2812fcbd4a","Type":"ContainerStarted","Data":"2828ba94a8a28e3012090b9be4de56dd7a71fcb4ff38037b8226c0244fd4d980"} Jan 30 17:12:27 crc kubenswrapper[4875]: I0130 17:12:27.188894 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-db-sync-xcxvd" podStartSLOduration=1.5241929170000001 podStartE2EDuration="10.18887319s" podCreationTimestamp="2026-01-30 17:12:17 +0000 UTC" firstStartedPulling="2026-01-30 17:12:17.866022112 +0000 UTC m=+948.413385495" lastFinishedPulling="2026-01-30 17:12:26.530702385 +0000 UTC m=+957.078065768" observedRunningTime="2026-01-30 17:12:27.187682209 +0000 UTC m=+957.735045592" watchObservedRunningTime="2026-01-30 17:12:27.18887319 +0000 UTC m=+957.736236573" Jan 30 17:12:30 crc kubenswrapper[4875]: I0130 17:12:30.181162 4875 generic.go:334] "Generic (PLEG): container finished" podID="b2064320-5eaf-4bef-af21-eb2812fcbd4a" containerID="2828ba94a8a28e3012090b9be4de56dd7a71fcb4ff38037b8226c0244fd4d980" exitCode=0 Jan 30 17:12:30 crc kubenswrapper[4875]: I0130 17:12:30.181706 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-xcxvd" event={"ID":"b2064320-5eaf-4bef-af21-eb2812fcbd4a","Type":"ContainerDied","Data":"2828ba94a8a28e3012090b9be4de56dd7a71fcb4ff38037b8226c0244fd4d980"} Jan 30 17:12:31 crc kubenswrapper[4875]: I0130 17:12:31.467794 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-xcxvd" Jan 30 17:12:31 crc kubenswrapper[4875]: I0130 17:12:31.498055 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2064320-5eaf-4bef-af21-eb2812fcbd4a-config-data\") pod \"b2064320-5eaf-4bef-af21-eb2812fcbd4a\" (UID: \"b2064320-5eaf-4bef-af21-eb2812fcbd4a\") " Jan 30 17:12:31 crc kubenswrapper[4875]: I0130 17:12:31.498158 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2064320-5eaf-4bef-af21-eb2812fcbd4a-combined-ca-bundle\") pod \"b2064320-5eaf-4bef-af21-eb2812fcbd4a\" (UID: \"b2064320-5eaf-4bef-af21-eb2812fcbd4a\") " Jan 30 17:12:31 crc kubenswrapper[4875]: I0130 17:12:31.498199 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg8pj\" (UniqueName: \"kubernetes.io/projected/b2064320-5eaf-4bef-af21-eb2812fcbd4a-kube-api-access-hg8pj\") pod \"b2064320-5eaf-4bef-af21-eb2812fcbd4a\" (UID: \"b2064320-5eaf-4bef-af21-eb2812fcbd4a\") " Jan 30 17:12:31 crc kubenswrapper[4875]: I0130 17:12:31.503824 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2064320-5eaf-4bef-af21-eb2812fcbd4a-kube-api-access-hg8pj" (OuterVolumeSpecName: "kube-api-access-hg8pj") pod "b2064320-5eaf-4bef-af21-eb2812fcbd4a" (UID: "b2064320-5eaf-4bef-af21-eb2812fcbd4a"). InnerVolumeSpecName "kube-api-access-hg8pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:12:31 crc kubenswrapper[4875]: I0130 17:12:31.518505 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2064320-5eaf-4bef-af21-eb2812fcbd4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2064320-5eaf-4bef-af21-eb2812fcbd4a" (UID: "b2064320-5eaf-4bef-af21-eb2812fcbd4a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:12:31 crc kubenswrapper[4875]: I0130 17:12:31.531797 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2064320-5eaf-4bef-af21-eb2812fcbd4a-config-data" (OuterVolumeSpecName: "config-data") pod "b2064320-5eaf-4bef-af21-eb2812fcbd4a" (UID: "b2064320-5eaf-4bef-af21-eb2812fcbd4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:12:31 crc kubenswrapper[4875]: I0130 17:12:31.600060 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2064320-5eaf-4bef-af21-eb2812fcbd4a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:31 crc kubenswrapper[4875]: I0130 17:12:31.600094 4875 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2064320-5eaf-4bef-af21-eb2812fcbd4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:31 crc kubenswrapper[4875]: I0130 17:12:31.600106 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg8pj\" (UniqueName: \"kubernetes.io/projected/b2064320-5eaf-4bef-af21-eb2812fcbd4a-kube-api-access-hg8pj\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.196055 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-xcxvd" event={"ID":"b2064320-5eaf-4bef-af21-eb2812fcbd4a","Type":"ContainerDied","Data":"b460fa8e6a3a34bfe3ecaf50d9f05b9d6e3448ae31b3ad0b8f64ad5736bc8d04"} Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.196391 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b460fa8e6a3a34bfe3ecaf50d9f05b9d6e3448ae31b3ad0b8f64ad5736bc8d04" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.196285 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-xcxvd" Jan 30 17:12:32 crc kubenswrapper[4875]: E0130 17:12:32.285818 4875 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2064320_5eaf_4bef_af21_eb2812fcbd4a.slice/crio-b460fa8e6a3a34bfe3ecaf50d9f05b9d6e3448ae31b3ad0b8f64ad5736bc8d04\": RecentStats: unable to find data in memory cache]" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.384980 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-rh6ws"] Jan 30 17:12:32 crc kubenswrapper[4875]: E0130 17:12:32.385264 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2064320-5eaf-4bef-af21-eb2812fcbd4a" containerName="keystone-db-sync" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.385280 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2064320-5eaf-4bef-af21-eb2812fcbd4a" containerName="keystone-db-sync" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.385425 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2064320-5eaf-4bef-af21-eb2812fcbd4a" containerName="keystone-db-sync" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.385872 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.394036 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.394313 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.394450 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.394558 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-rh6ws"] Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.394607 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-8b6fj" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.402031 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.410203 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-config-data\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.410258 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-scripts\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.410396 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-credential-keys\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.410431 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-fernet-keys\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.411250 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-combined-ca-bundle\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.411297 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfzqb\" (UniqueName: \"kubernetes.io/projected/5c7c2f10-e37b-481d-8797-ca3ab84c2106-kube-api-access-sfzqb\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 
crc kubenswrapper[4875]: I0130 17:12:32.511936 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-config-data\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.511984 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-scripts\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.512024 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-credential-keys\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.512046 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-fernet-keys\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.512094 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-combined-ca-bundle\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.512114 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfzqb\" (UniqueName: \"kubernetes.io/projected/5c7c2f10-e37b-481d-8797-ca3ab84c2106-kube-api-access-sfzqb\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.515289 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-credential-keys\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.515517 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-scripts\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.515988 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-fernet-keys\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.515995 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-combined-ca-bundle\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.523624 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-config-data\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.528047 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfzqb\" (UniqueName: \"kubernetes.io/projected/5c7c2f10-e37b-481d-8797-ca3ab84c2106-kube-api-access-sfzqb\") pod \"keystone-bootstrap-rh6ws\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.616934 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-sync-jc9wl"] Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.618029 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.620149 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.620899 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.621121 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-fmzq4" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.658540 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-jc9wl"] Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.707145 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.716529 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-scripts\") pod \"placement-db-sync-jc9wl\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.716644 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-config-data\") pod \"placement-db-sync-jc9wl\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.716671 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-combined-ca-bundle\") pod \"placement-db-sync-jc9wl\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.716687 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a08451c-5704-47a5-ae37-83a7f01bc502-logs\") pod \"placement-db-sync-jc9wl\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.716712 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnkcd\" (UniqueName: \"kubernetes.io/projected/2a08451c-5704-47a5-ae37-83a7f01bc502-kube-api-access-lnkcd\") pod \"placement-db-sync-jc9wl\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.817942 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-config-data\") pod \"placement-db-sync-jc9wl\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.818413 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-combined-ca-bundle\") pod \"placement-db-sync-jc9wl\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.818445 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a08451c-5704-47a5-ae37-83a7f01bc502-logs\") pod \"placement-db-sync-jc9wl\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.818483 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnkcd\" (UniqueName: \"kubernetes.io/projected/2a08451c-5704-47a5-ae37-83a7f01bc502-kube-api-access-lnkcd\") pod \"placement-db-sync-jc9wl\" (UID: 
\"2a08451c-5704-47a5-ae37-83a7f01bc502\") " pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.818524 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-scripts\") pod \"placement-db-sync-jc9wl\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.819789 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a08451c-5704-47a5-ae37-83a7f01bc502-logs\") pod \"placement-db-sync-jc9wl\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.829815 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-scripts\") pod \"placement-db-sync-jc9wl\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.830062 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-config-data\") pod \"placement-db-sync-jc9wl\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.832103 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-combined-ca-bundle\") pod \"placement-db-sync-jc9wl\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.841003 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnkcd\" (UniqueName: \"kubernetes.io/projected/2a08451c-5704-47a5-ae37-83a7f01bc502-kube-api-access-lnkcd\") pod \"placement-db-sync-jc9wl\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:32 crc kubenswrapper[4875]: I0130 17:12:32.938339 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:33 crc kubenswrapper[4875]: I0130 17:12:33.163308 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-rh6ws"] Jan 30 17:12:33 crc kubenswrapper[4875]: W0130 17:12:33.169727 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c7c2f10_e37b_481d_8797_ca3ab84c2106.slice/crio-9ae220870cf8bb72f9ec3fc06976bc0e8516e409a64e07d48ae6199c5b3733a2 WatchSource:0}: Error finding container 9ae220870cf8bb72f9ec3fc06976bc0e8516e409a64e07d48ae6199c5b3733a2: Status 404 returned error can't find the container with id 9ae220870cf8bb72f9ec3fc06976bc0e8516e409a64e07d48ae6199c5b3733a2 Jan 30 17:12:33 crc kubenswrapper[4875]: I0130 17:12:33.203270 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-rh6ws" event={"ID":"5c7c2f10-e37b-481d-8797-ca3ab84c2106","Type":"ContainerStarted","Data":"9ae220870cf8bb72f9ec3fc06976bc0e8516e409a64e07d48ae6199c5b3733a2"} Jan 30 17:12:33 crc kubenswrapper[4875]: I0130 17:12:33.397222 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-jc9wl"] Jan 30 17:12:34 crc kubenswrapper[4875]: I0130 17:12:34.213172 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-rh6ws" event={"ID":"5c7c2f10-e37b-481d-8797-ca3ab84c2106","Type":"ContainerStarted","Data":"dac3a03bf6b19c8eefa4d87a19106ed98c4b4313745986e919fc1d08b0db2e74"} Jan 30 17:12:34 crc kubenswrapper[4875]: I0130 17:12:34.214653 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-jc9wl" event={"ID":"2a08451c-5704-47a5-ae37-83a7f01bc502","Type":"ContainerStarted","Data":"c335d17e7508fec1e69344c3764c09146570d713210059ed5befb7f960300bb4"} Jan 30 17:12:34 crc kubenswrapper[4875]: I0130 17:12:34.231984 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-rh6ws" podStartSLOduration=2.231968649 podStartE2EDuration="2.231968649s" podCreationTimestamp="2026-01-30 17:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:12:34.230071774 +0000 UTC m=+964.777435157" watchObservedRunningTime="2026-01-30 17:12:34.231968649 +0000 UTC m=+964.779332032" Jan 30 17:12:36 crc kubenswrapper[4875]: I0130 17:12:36.228938 4875 generic.go:334] "Generic (PLEG): container finished" podID="5c7c2f10-e37b-481d-8797-ca3ab84c2106" containerID="dac3a03bf6b19c8eefa4d87a19106ed98c4b4313745986e919fc1d08b0db2e74" exitCode=0 Jan 30 17:12:36 crc kubenswrapper[4875]: I0130 17:12:36.229215 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-rh6ws" event={"ID":"5c7c2f10-e37b-481d-8797-ca3ab84c2106","Type":"ContainerDied","Data":"dac3a03bf6b19c8eefa4d87a19106ed98c4b4313745986e919fc1d08b0db2e74"} Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.242173 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-jc9wl" event={"ID":"2a08451c-5704-47a5-ae37-83a7f01bc502","Type":"ContainerStarted","Data":"3fa5d8c7f70a96347025ba4932a4eb1ab36d6f79b84230dde8c14dfee264212d"} Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.271317 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="nova-kuttl-default/placement-db-sync-jc9wl" podStartSLOduration=2.010945375 podStartE2EDuration="5.271296507s" podCreationTimestamp="2026-01-30 17:12:32 +0000 UTC" firstStartedPulling="2026-01-30 17:12:33.404060621 +0000 UTC m=+963.951424004" lastFinishedPulling="2026-01-30 17:12:36.664411753 +0000 UTC m=+967.211775136" observedRunningTime="2026-01-30 17:12:37.269734582 +0000 UTC m=+967.817097975" watchObservedRunningTime="2026-01-30 17:12:37.271296507 +0000 UTC m=+967.818659890" Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.535156 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.685050 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfzqb\" (UniqueName: \"kubernetes.io/projected/5c7c2f10-e37b-481d-8797-ca3ab84c2106-kube-api-access-sfzqb\") pod \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.685168 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-config-data\") pod \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.685210 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-credential-keys\") pod \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.685330 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-fernet-keys\") pod \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.685502 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-scripts\") pod \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.685705 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-combined-ca-bundle\") pod \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\" (UID: \"5c7c2f10-e37b-481d-8797-ca3ab84c2106\") " Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.690731 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5c7c2f10-e37b-481d-8797-ca3ab84c2106" (UID: "5c7c2f10-e37b-481d-8797-ca3ab84c2106"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.690887 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5c7c2f10-e37b-481d-8797-ca3ab84c2106" (UID: "5c7c2f10-e37b-481d-8797-ca3ab84c2106"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.691847 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-scripts" (OuterVolumeSpecName: "scripts") pod "5c7c2f10-e37b-481d-8797-ca3ab84c2106" (UID: "5c7c2f10-e37b-481d-8797-ca3ab84c2106"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.695419 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7c2f10-e37b-481d-8797-ca3ab84c2106-kube-api-access-sfzqb" (OuterVolumeSpecName: "kube-api-access-sfzqb") pod "5c7c2f10-e37b-481d-8797-ca3ab84c2106" (UID: "5c7c2f10-e37b-481d-8797-ca3ab84c2106"). InnerVolumeSpecName "kube-api-access-sfzqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.708076 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-config-data" (OuterVolumeSpecName: "config-data") pod "5c7c2f10-e37b-481d-8797-ca3ab84c2106" (UID: "5c7c2f10-e37b-481d-8797-ca3ab84c2106"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.709801 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c7c2f10-e37b-481d-8797-ca3ab84c2106" (UID: "5c7c2f10-e37b-481d-8797-ca3ab84c2106"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.787448 4875 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.787485 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.787493 4875 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.787507 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfzqb\" (UniqueName: \"kubernetes.io/projected/5c7c2f10-e37b-481d-8797-ca3ab84c2106-kube-api-access-sfzqb\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.787515 4875 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:37 crc kubenswrapper[4875]: I0130 17:12:37.787523 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7c2f10-e37b-481d-8797-ca3ab84c2106-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.256995 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-rh6ws" event={"ID":"5c7c2f10-e37b-481d-8797-ca3ab84c2106","Type":"ContainerDied","Data":"9ae220870cf8bb72f9ec3fc06976bc0e8516e409a64e07d48ae6199c5b3733a2"} Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.257084 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ae220870cf8bb72f9ec3fc06976bc0e8516e409a64e07d48ae6199c5b3733a2" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.257040 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-rh6ws" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.418695 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-rh6ws"] Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.426724 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-rh6ws"] Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.443336 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-gpjtc"] Jan 30 17:12:38 crc kubenswrapper[4875]: E0130 17:12:38.443795 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7c2f10-e37b-481d-8797-ca3ab84c2106" containerName="keystone-bootstrap" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.443812 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7c2f10-e37b-481d-8797-ca3ab84c2106" containerName="keystone-bootstrap" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.444006 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c7c2f10-e37b-481d-8797-ca3ab84c2106" containerName="keystone-bootstrap" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.444649 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.447627 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.447695 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.447699 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.447840 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-8b6fj" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.448020 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.453916 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-gpjtc"] Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.601361 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-scripts\") pod \"keystone-bootstrap-gpjtc\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.601425 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-credential-keys\") pod \"keystone-bootstrap-gpjtc\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.601466 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djq7p\" (UniqueName: \"kubernetes.io/projected/b333d616-9e20-4fcf-8c85-f3c90a6bee75-kube-api-access-djq7p\") pod \"keystone-bootstrap-gpjtc\" (UID: 
\"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.601498 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-config-data\") pod \"keystone-bootstrap-gpjtc\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.601833 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-combined-ca-bundle\") pod \"keystone-bootstrap-gpjtc\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.601922 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-fernet-keys\") pod \"keystone-bootstrap-gpjtc\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.703610 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-scripts\") pod \"keystone-bootstrap-gpjtc\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.703679 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-credential-keys\") pod \"keystone-bootstrap-gpjtc\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.703724 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djq7p\" (UniqueName: \"kubernetes.io/projected/b333d616-9e20-4fcf-8c85-f3c90a6bee75-kube-api-access-djq7p\") pod \"keystone-bootstrap-gpjtc\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.703760 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-config-data\") pod \"keystone-bootstrap-gpjtc\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.703832 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-combined-ca-bundle\") pod \"keystone-bootstrap-gpjtc\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.703867 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-fernet-keys\") pod \"keystone-bootstrap-gpjtc\" (UID: 
\"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.708563 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-fernet-keys\") pod \"keystone-bootstrap-gpjtc\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.708627 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-combined-ca-bundle\") pod \"keystone-bootstrap-gpjtc\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.713379 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-credential-keys\") pod \"keystone-bootstrap-gpjtc\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.714921 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-config-data\") pod \"keystone-bootstrap-gpjtc\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.725211 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-scripts\") pod \"keystone-bootstrap-gpjtc\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.740975 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djq7p\" (UniqueName: \"kubernetes.io/projected/b333d616-9e20-4fcf-8c85-f3c90a6bee75-kube-api-access-djq7p\") pod \"keystone-bootstrap-gpjtc\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:38 crc kubenswrapper[4875]: I0130 17:12:38.825375 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:39 crc kubenswrapper[4875]: I0130 17:12:39.269453 4875 generic.go:334] "Generic (PLEG): container finished" podID="2a08451c-5704-47a5-ae37-83a7f01bc502" containerID="3fa5d8c7f70a96347025ba4932a4eb1ab36d6f79b84230dde8c14dfee264212d" exitCode=0 Jan 30 17:12:39 crc kubenswrapper[4875]: I0130 17:12:39.269775 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-jc9wl" event={"ID":"2a08451c-5704-47a5-ae37-83a7f01bc502","Type":"ContainerDied","Data":"3fa5d8c7f70a96347025ba4932a4eb1ab36d6f79b84230dde8c14dfee264212d"} Jan 30 17:12:39 crc kubenswrapper[4875]: I0130 17:12:39.269947 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-gpjtc"] Jan 30 17:12:39 crc kubenswrapper[4875]: W0130 17:12:39.273225 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb333d616_9e20_4fcf_8c85_f3c90a6bee75.slice/crio-3f602c2ffe63eedb70b23f72d46b4b6d0a9b1ae20eb99ff52af1464ba94b2386 WatchSource:0}: Error finding container 3f602c2ffe63eedb70b23f72d46b4b6d0a9b1ae20eb99ff52af1464ba94b2386: Status 404 returned error can't find the container with id 3f602c2ffe63eedb70b23f72d46b4b6d0a9b1ae20eb99ff52af1464ba94b2386 Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.145200 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c7c2f10-e37b-481d-8797-ca3ab84c2106" path="/var/lib/kubelet/pods/5c7c2f10-e37b-481d-8797-ca3ab84c2106/volumes" Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.279285 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-gpjtc" event={"ID":"b333d616-9e20-4fcf-8c85-f3c90a6bee75","Type":"ContainerStarted","Data":"65d11b87214bbcdb19a11096312e897c3c2e56a97a48d9b483c34612eb719162"} Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.279361 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-gpjtc" event={"ID":"b333d616-9e20-4fcf-8c85-f3c90a6bee75","Type":"ContainerStarted","Data":"3f602c2ffe63eedb70b23f72d46b4b6d0a9b1ae20eb99ff52af1464ba94b2386"} Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.300616 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-gpjtc" podStartSLOduration=2.300550236 podStartE2EDuration="2.300550236s" podCreationTimestamp="2026-01-30 17:12:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:12:40.299032803 +0000 UTC m=+970.846396186" watchObservedRunningTime="2026-01-30 17:12:40.300550236 +0000 UTC m=+970.847913639" Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.591093 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.733069 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a08451c-5704-47a5-ae37-83a7f01bc502-logs\") pod \"2a08451c-5704-47a5-ae37-83a7f01bc502\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.733554 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-config-data\") pod \"2a08451c-5704-47a5-ae37-83a7f01bc502\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.734004 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnkcd\" (UniqueName: \"kubernetes.io/projected/2a08451c-5704-47a5-ae37-83a7f01bc502-kube-api-access-lnkcd\") pod \"2a08451c-5704-47a5-ae37-83a7f01bc502\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.734199 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-scripts\") pod \"2a08451c-5704-47a5-ae37-83a7f01bc502\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.734380 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-combined-ca-bundle\") pod \"2a08451c-5704-47a5-ae37-83a7f01bc502\" (UID: \"2a08451c-5704-47a5-ae37-83a7f01bc502\") " Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.734314 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a08451c-5704-47a5-ae37-83a7f01bc502-logs" (OuterVolumeSpecName: "logs") pod "2a08451c-5704-47a5-ae37-83a7f01bc502" (UID: "2a08451c-5704-47a5-ae37-83a7f01bc502"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.735205 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a08451c-5704-47a5-ae37-83a7f01bc502-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.738109 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-scripts" (OuterVolumeSpecName: "scripts") pod "2a08451c-5704-47a5-ae37-83a7f01bc502" (UID: "2a08451c-5704-47a5-ae37-83a7f01bc502"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.738310 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a08451c-5704-47a5-ae37-83a7f01bc502-kube-api-access-lnkcd" (OuterVolumeSpecName: "kube-api-access-lnkcd") pod "2a08451c-5704-47a5-ae37-83a7f01bc502" (UID: "2a08451c-5704-47a5-ae37-83a7f01bc502"). InnerVolumeSpecName "kube-api-access-lnkcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.751878 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a08451c-5704-47a5-ae37-83a7f01bc502" (UID: "2a08451c-5704-47a5-ae37-83a7f01bc502"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.759117 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-config-data" (OuterVolumeSpecName: "config-data") pod "2a08451c-5704-47a5-ae37-83a7f01bc502" (UID: "2a08451c-5704-47a5-ae37-83a7f01bc502"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.836783 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.837131 4875 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.837212 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a08451c-5704-47a5-ae37-83a7f01bc502-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:40 crc kubenswrapper[4875]: I0130 17:12:40.837286 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnkcd\" (UniqueName: \"kubernetes.io/projected/2a08451c-5704-47a5-ae37-83a7f01bc502-kube-api-access-lnkcd\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.289288 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-jc9wl" event={"ID":"2a08451c-5704-47a5-ae37-83a7f01bc502","Type":"ContainerDied","Data":"c335d17e7508fec1e69344c3764c09146570d713210059ed5befb7f960300bb4"} Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.289352 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c335d17e7508fec1e69344c3764c09146570d713210059ed5befb7f960300bb4" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.289318 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-jc9wl" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.389350 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-5d585776fb-7z44m"] Jan 30 17:12:41 crc kubenswrapper[4875]: E0130 17:12:41.390331 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a08451c-5704-47a5-ae37-83a7f01bc502" containerName="placement-db-sync" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.390453 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a08451c-5704-47a5-ae37-83a7f01bc502" containerName="placement-db-sync" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.390729 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a08451c-5704-47a5-ae37-83a7f01bc502" containerName="placement-db-sync" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.391790 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.394335 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.394645 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.394672 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-fmzq4" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.400763 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-5d585776fb-7z44m"] Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.549156 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-combined-ca-bundle\") pod \"placement-5d585776fb-7z44m\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.549242 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-config-data\") pod \"placement-5d585776fb-7z44m\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.549307 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-scripts\") pod \"placement-5d585776fb-7z44m\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.549354 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx4hb\" (UniqueName: \"kubernetes.io/projected/a2370b24-9afc-4626-b761-00e89f8a6b84-kube-api-access-dx4hb\") pod \"placement-5d585776fb-7z44m\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.549383 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2370b24-9afc-4626-b761-00e89f8a6b84-logs\") pod \"placement-5d585776fb-7z44m\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.650154 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-combined-ca-bundle\") pod \"placement-5d585776fb-7z44m\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.650217 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-config-data\") pod \"placement-5d585776fb-7z44m\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.650270 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-scripts\") pod \"placement-5d585776fb-7z44m\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.650306 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx4hb\" (UniqueName: \"kubernetes.io/projected/a2370b24-9afc-4626-b761-00e89f8a6b84-kube-api-access-dx4hb\") pod \"placement-5d585776fb-7z44m\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.650323 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2370b24-9afc-4626-b761-00e89f8a6b84-logs\") pod \"placement-5d585776fb-7z44m\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.650737 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2370b24-9afc-4626-b761-00e89f8a6b84-logs\") pod \"placement-5d585776fb-7z44m\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.655312 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-config-data\") pod \"placement-5d585776fb-7z44m\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.657059 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-scripts\") pod \"placement-5d585776fb-7z44m\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.658186 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-combined-ca-bundle\") pod 
\"placement-5d585776fb-7z44m\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.670264 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx4hb\" (UniqueName: \"kubernetes.io/projected/a2370b24-9afc-4626-b761-00e89f8a6b84-kube-api-access-dx4hb\") pod \"placement-5d585776fb-7z44m\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:41 crc kubenswrapper[4875]: I0130 17:12:41.708291 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:42 crc kubenswrapper[4875]: I0130 17:12:42.168636 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-5d585776fb-7z44m"] Jan 30 17:12:42 crc kubenswrapper[4875]: I0130 17:12:42.296064 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-5d585776fb-7z44m" event={"ID":"a2370b24-9afc-4626-b761-00e89f8a6b84","Type":"ContainerStarted","Data":"4b29b3dd9990a63059e55ab02a5d1c65f0c70d0d9856f36ba7a9b45a66a72d02"} Jan 30 17:12:43 crc kubenswrapper[4875]: I0130 17:12:43.305565 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-5d585776fb-7z44m" event={"ID":"a2370b24-9afc-4626-b761-00e89f8a6b84","Type":"ContainerStarted","Data":"df429df0ea9794f45319f6a0e1565b428bff05814e5b22aec677c7ed70d9c5ff"} Jan 30 17:12:44 crc kubenswrapper[4875]: I0130 17:12:44.315012 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-5d585776fb-7z44m" event={"ID":"a2370b24-9afc-4626-b761-00e89f8a6b84","Type":"ContainerStarted","Data":"7d2643ac15756f8659d7b75d39b1e1c9f307ff226a0d06e64e4e08184b1ab421"} Jan 30 17:12:44 crc kubenswrapper[4875]: I0130 17:12:44.315741 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:44 crc kubenswrapper[4875]: I0130 17:12:44.316085 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:12:44 crc kubenswrapper[4875]: I0130 17:12:44.341346 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-5d585776fb-7z44m" podStartSLOduration=3.341319967 podStartE2EDuration="3.341319967s" podCreationTimestamp="2026-01-30 17:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:12:44.332473088 +0000 UTC m=+974.879836471" watchObservedRunningTime="2026-01-30 17:12:44.341319967 +0000 UTC m=+974.888683350" Jan 30 17:12:46 crc kubenswrapper[4875]: I0130 17:12:46.343073 4875 generic.go:334] "Generic (PLEG): container finished" podID="b333d616-9e20-4fcf-8c85-f3c90a6bee75" containerID="65d11b87214bbcdb19a11096312e897c3c2e56a97a48d9b483c34612eb719162" exitCode=0 Jan 30 17:12:46 crc kubenswrapper[4875]: I0130 17:12:46.343199 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-gpjtc" event={"ID":"b333d616-9e20-4fcf-8c85-f3c90a6bee75","Type":"ContainerDied","Data":"65d11b87214bbcdb19a11096312e897c3c2e56a97a48d9b483c34612eb719162"} Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.637899 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.658455 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-config-data\") pod \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.658492 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-combined-ca-bundle\") pod \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.658562 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djq7p\" (UniqueName: \"kubernetes.io/projected/b333d616-9e20-4fcf-8c85-f3c90a6bee75-kube-api-access-djq7p\") pod \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.658600 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-scripts\") pod \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.658659 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-credential-keys\") pod \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.658680 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-fernet-keys\") pod \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\" (UID: \"b333d616-9e20-4fcf-8c85-f3c90a6bee75\") " Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.689676 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b333d616-9e20-4fcf-8c85-f3c90a6bee75" (UID: "b333d616-9e20-4fcf-8c85-f3c90a6bee75"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.692389 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-scripts" (OuterVolumeSpecName: "scripts") pod "b333d616-9e20-4fcf-8c85-f3c90a6bee75" (UID: "b333d616-9e20-4fcf-8c85-f3c90a6bee75"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.692625 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b333d616-9e20-4fcf-8c85-f3c90a6bee75" (UID: "b333d616-9e20-4fcf-8c85-f3c90a6bee75"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.692635 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b333d616-9e20-4fcf-8c85-f3c90a6bee75-kube-api-access-djq7p" (OuterVolumeSpecName: "kube-api-access-djq7p") pod "b333d616-9e20-4fcf-8c85-f3c90a6bee75" (UID: "b333d616-9e20-4fcf-8c85-f3c90a6bee75"). InnerVolumeSpecName "kube-api-access-djq7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.719873 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b333d616-9e20-4fcf-8c85-f3c90a6bee75" (UID: "b333d616-9e20-4fcf-8c85-f3c90a6bee75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.721310 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-config-data" (OuterVolumeSpecName: "config-data") pod "b333d616-9e20-4fcf-8c85-f3c90a6bee75" (UID: "b333d616-9e20-4fcf-8c85-f3c90a6bee75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.759855 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djq7p\" (UniqueName: \"kubernetes.io/projected/b333d616-9e20-4fcf-8c85-f3c90a6bee75-kube-api-access-djq7p\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.759886 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.759897 4875 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.759905 4875 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.759915 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:47 crc kubenswrapper[4875]: I0130 17:12:47.759923 4875 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b333d616-9e20-4fcf-8c85-f3c90a6bee75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.357500 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-gpjtc" event={"ID":"b333d616-9e20-4fcf-8c85-f3c90a6bee75","Type":"ContainerDied","Data":"3f602c2ffe63eedb70b23f72d46b4b6d0a9b1ae20eb99ff52af1464ba94b2386"} Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.357764 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f602c2ffe63eedb70b23f72d46b4b6d0a9b1ae20eb99ff52af1464ba94b2386" Jan 30 17:12:48 crc 
kubenswrapper[4875]: I0130 17:12:48.357524 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-gpjtc" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.456008 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-b6888cc46-89gfr"] Jan 30 17:12:48 crc kubenswrapper[4875]: E0130 17:12:48.456365 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b333d616-9e20-4fcf-8c85-f3c90a6bee75" containerName="keystone-bootstrap" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.456398 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="b333d616-9e20-4fcf-8c85-f3c90a6bee75" containerName="keystone-bootstrap" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.456679 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="b333d616-9e20-4fcf-8c85-f3c90a6bee75" containerName="keystone-bootstrap" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.457474 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.462755 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.463263 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.463286 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-8b6fj" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.463443 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.466039 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-b6888cc46-89gfr"] Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.470938 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95a5815-f333-496a-a3cc-e568c1ded6ba-combined-ca-bundle\") pod \"keystone-b6888cc46-89gfr\" (UID: \"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.471088 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f59gv\" (UniqueName: \"kubernetes.io/projected/e95a5815-f333-496a-a3cc-e568c1ded6ba-kube-api-access-f59gv\") pod \"keystone-b6888cc46-89gfr\" (UID: \"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.471277 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e95a5815-f333-496a-a3cc-e568c1ded6ba-credential-keys\") pod \"keystone-b6888cc46-89gfr\" (UID: \"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.471394 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e95a5815-f333-496a-a3cc-e568c1ded6ba-scripts\") pod \"keystone-b6888cc46-89gfr\" (UID: 
\"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.471509 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e95a5815-f333-496a-a3cc-e568c1ded6ba-fernet-keys\") pod \"keystone-b6888cc46-89gfr\" (UID: \"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.471662 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e95a5815-f333-496a-a3cc-e568c1ded6ba-config-data\") pod \"keystone-b6888cc46-89gfr\" (UID: \"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.572281 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e95a5815-f333-496a-a3cc-e568c1ded6ba-credential-keys\") pod \"keystone-b6888cc46-89gfr\" (UID: \"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.572333 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e95a5815-f333-496a-a3cc-e568c1ded6ba-scripts\") pod \"keystone-b6888cc46-89gfr\" (UID: \"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.572365 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e95a5815-f333-496a-a3cc-e568c1ded6ba-fernet-keys\") pod \"keystone-b6888cc46-89gfr\" (UID: \"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.572406 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e95a5815-f333-496a-a3cc-e568c1ded6ba-config-data\") pod \"keystone-b6888cc46-89gfr\" (UID: \"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.572459 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95a5815-f333-496a-a3cc-e568c1ded6ba-combined-ca-bundle\") pod \"keystone-b6888cc46-89gfr\" (UID: \"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.572483 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f59gv\" (UniqueName: \"kubernetes.io/projected/e95a5815-f333-496a-a3cc-e568c1ded6ba-kube-api-access-f59gv\") pod \"keystone-b6888cc46-89gfr\" (UID: \"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.576110 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e95a5815-f333-496a-a3cc-e568c1ded6ba-scripts\") pod \"keystone-b6888cc46-89gfr\" (UID: \"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " 
pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.576112 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e95a5815-f333-496a-a3cc-e568c1ded6ba-credential-keys\") pod \"keystone-b6888cc46-89gfr\" (UID: \"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.576314 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e95a5815-f333-496a-a3cc-e568c1ded6ba-config-data\") pod \"keystone-b6888cc46-89gfr\" (UID: \"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.576860 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e95a5815-f333-496a-a3cc-e568c1ded6ba-fernet-keys\") pod \"keystone-b6888cc46-89gfr\" (UID: \"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.577940 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e95a5815-f333-496a-a3cc-e568c1ded6ba-combined-ca-bundle\") pod \"keystone-b6888cc46-89gfr\" (UID: \"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.587840 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f59gv\" (UniqueName: \"kubernetes.io/projected/e95a5815-f333-496a-a3cc-e568c1ded6ba-kube-api-access-f59gv\") pod \"keystone-b6888cc46-89gfr\" (UID: \"e95a5815-f333-496a-a3cc-e568c1ded6ba\") " pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:48 crc kubenswrapper[4875]: I0130 17:12:48.773670 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:49 crc kubenswrapper[4875]: I0130 17:12:49.284983 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-b6888cc46-89gfr"] Jan 30 17:12:49 crc kubenswrapper[4875]: I0130 17:12:49.366268 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-b6888cc46-89gfr" event={"ID":"e95a5815-f333-496a-a3cc-e568c1ded6ba","Type":"ContainerStarted","Data":"1735ddfe9b621cba4491f411fdfcaaa07c55ec25b8eb74a752b559e2f813f6fd"} Jan 30 17:12:50 crc kubenswrapper[4875]: I0130 17:12:50.374929 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-b6888cc46-89gfr" event={"ID":"e95a5815-f333-496a-a3cc-e568c1ded6ba","Type":"ContainerStarted","Data":"e54618f4b8d67cdff916b2fc183527a030530d7cd69c5602eace0ba4ca0c0cfb"} Jan 30 17:12:50 crc kubenswrapper[4875]: I0130 17:12:50.375100 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:12:50 crc kubenswrapper[4875]: I0130 17:12:50.400151 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-b6888cc46-89gfr" podStartSLOduration=2.40012844 podStartE2EDuration="2.40012844s" podCreationTimestamp="2026-01-30 17:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:12:50.393181097 +0000 UTC m=+980.940544500" watchObservedRunningTime="2026-01-30 17:12:50.40012844 +0000 UTC m=+980.947491823" Jan 30 17:13:12 crc kubenswrapper[4875]: I0130 17:13:12.765879 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:13:12 crc kubenswrapper[4875]: I0130 17:13:12.767329 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.075707 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-696447b7b-gwj9q"] Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.076870 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.092246 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-696447b7b-gwj9q"] Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.264543 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e4060f6-e91b-4f67-b959-9e2a125c05d3-config-data\") pod \"placement-696447b7b-gwj9q\" (UID: \"2e4060f6-e91b-4f67-b959-9e2a125c05d3\") " pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.264654 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e4060f6-e91b-4f67-b959-9e2a125c05d3-logs\") pod \"placement-696447b7b-gwj9q\" (UID: \"2e4060f6-e91b-4f67-b959-9e2a125c05d3\") " pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.264680 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4060f6-e91b-4f67-b959-9e2a125c05d3-combined-ca-bundle\") pod \"placement-696447b7b-gwj9q\" (UID: \"2e4060f6-e91b-4f67-b959-9e2a125c05d3\") " pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.264729 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e4060f6-e91b-4f67-b959-9e2a125c05d3-scripts\") pod \"placement-696447b7b-gwj9q\" (UID: \"2e4060f6-e91b-4f67-b959-9e2a125c05d3\") " pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.264784 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scph6\" (UniqueName: \"kubernetes.io/projected/2e4060f6-e91b-4f67-b959-9e2a125c05d3-kube-api-access-scph6\") pod \"placement-696447b7b-gwj9q\" (UID: \"2e4060f6-e91b-4f67-b959-9e2a125c05d3\") " pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.366401 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e4060f6-e91b-4f67-b959-9e2a125c05d3-config-data\") pod \"placement-696447b7b-gwj9q\" (UID: \"2e4060f6-e91b-4f67-b959-9e2a125c05d3\") " pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.366466 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e4060f6-e91b-4f67-b959-9e2a125c05d3-logs\") pod \"placement-696447b7b-gwj9q\" (UID: \"2e4060f6-e91b-4f67-b959-9e2a125c05d3\") " pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.366488 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4060f6-e91b-4f67-b959-9e2a125c05d3-combined-ca-bundle\") pod \"placement-696447b7b-gwj9q\" (UID: \"2e4060f6-e91b-4f67-b959-9e2a125c05d3\") " pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.366536 4875 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e4060f6-e91b-4f67-b959-9e2a125c05d3-scripts\") pod \"placement-696447b7b-gwj9q\" (UID: \"2e4060f6-e91b-4f67-b959-9e2a125c05d3\") " pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.366576 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scph6\" (UniqueName: \"kubernetes.io/projected/2e4060f6-e91b-4f67-b959-9e2a125c05d3-kube-api-access-scph6\") pod \"placement-696447b7b-gwj9q\" (UID: \"2e4060f6-e91b-4f67-b959-9e2a125c05d3\") " pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.367153 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e4060f6-e91b-4f67-b959-9e2a125c05d3-logs\") pod \"placement-696447b7b-gwj9q\" (UID: \"2e4060f6-e91b-4f67-b959-9e2a125c05d3\") " pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.372285 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e4060f6-e91b-4f67-b959-9e2a125c05d3-config-data\") pod \"placement-696447b7b-gwj9q\" (UID: \"2e4060f6-e91b-4f67-b959-9e2a125c05d3\") " pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.372892 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4060f6-e91b-4f67-b959-9e2a125c05d3-combined-ca-bundle\") pod \"placement-696447b7b-gwj9q\" (UID: \"2e4060f6-e91b-4f67-b959-9e2a125c05d3\") " pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.374277 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e4060f6-e91b-4f67-b959-9e2a125c05d3-scripts\") pod \"placement-696447b7b-gwj9q\" (UID: \"2e4060f6-e91b-4f67-b959-9e2a125c05d3\") " pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.390488 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scph6\" (UniqueName: \"kubernetes.io/projected/2e4060f6-e91b-4f67-b959-9e2a125c05d3-kube-api-access-scph6\") pod \"placement-696447b7b-gwj9q\" (UID: \"2e4060f6-e91b-4f67-b959-9e2a125c05d3\") " pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.395113 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:13 crc kubenswrapper[4875]: I0130 17:13:13.810281 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-696447b7b-gwj9q"] Jan 30 17:13:14 crc kubenswrapper[4875]: I0130 17:13:14.546018 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-696447b7b-gwj9q" event={"ID":"2e4060f6-e91b-4f67-b959-9e2a125c05d3","Type":"ContainerStarted","Data":"9853dd883f7306fae229d2a2279bee998d6503259973e0558e21d73d1c10829c"} Jan 30 17:13:14 crc kubenswrapper[4875]: I0130 17:13:14.546403 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:14 crc kubenswrapper[4875]: I0130 17:13:14.546430 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:14 crc kubenswrapper[4875]: I0130 17:13:14.546438 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-696447b7b-gwj9q" event={"ID":"2e4060f6-e91b-4f67-b959-9e2a125c05d3","Type":"ContainerStarted","Data":"e907a4b4f85e625b6dbff1f27ffc8513d9636d547ad270b5cd0609d4e86338eb"} Jan 30 17:13:14 crc kubenswrapper[4875]: I0130 17:13:14.546447 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-696447b7b-gwj9q" event={"ID":"2e4060f6-e91b-4f67-b959-9e2a125c05d3","Type":"ContainerStarted","Data":"44197aa440cf6ea2ca35cb97f0570c9413b0305fba46ae6a8a466147d319a107"} Jan 30 17:13:14 crc kubenswrapper[4875]: I0130 17:13:14.573629 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-696447b7b-gwj9q" podStartSLOduration=1.5736039609999999 podStartE2EDuration="1.573603961s" podCreationTimestamp="2026-01-30 17:13:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:13:14.569827489 +0000 UTC m=+1005.117190872" watchObservedRunningTime="2026-01-30 17:13:14.573603961 +0000 UTC m=+1005.120967364" Jan 30 17:13:20 crc kubenswrapper[4875]: I0130 17:13:20.177116 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/keystone-b6888cc46-89gfr" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.461949 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.464285 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.467924 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstackclient-openstackclient-dockercfg-jxtw5" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.468201 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-config-secret" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.468348 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.474175 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.662504 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.662659 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-openstack-config\") pod \"openstackclient\" (UID: \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.662769 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-openstack-config-secret\") pod \"openstackclient\" (UID: \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.662818 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzzqh\" (UniqueName: \"kubernetes.io/projected/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-kube-api-access-fzzqh\") pod \"openstackclient\" (UID: \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.697209 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 30 17:13:22 crc kubenswrapper[4875]: E0130 17:13:22.697933 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-fzzqh openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="nova-kuttl-default/openstackclient" podUID="cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.704303 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.723936 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.724830 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.741139 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.764741 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.764919 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-openstack-config\") pod \"openstackclient\" (UID: \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.765048 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-openstack-config-secret\") pod \"openstackclient\" (UID: \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.765097 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzzqh\" (UniqueName: \"kubernetes.io/projected/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-kube-api-access-fzzqh\") pod \"openstackclient\" (UID: \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.766773 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-openstack-config\") pod \"openstackclient\" (UID: \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: E0130 17:13:22.767849 4875 projected.go:194] Error preparing data for projected volume kube-api-access-fzzqh for pod nova-kuttl-default/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4) does not match the UID in record. The object might have been deleted and then recreated Jan 30 17:13:22 crc kubenswrapper[4875]: E0130 17:13:22.767907 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-kube-api-access-fzzqh podName:cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4 nodeName:}" failed. No retries permitted until 2026-01-30 17:13:23.267891651 +0000 UTC m=+1013.815255034 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fzzqh" (UniqueName: "kubernetes.io/projected/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-kube-api-access-fzzqh") pod "openstackclient" (UID: "cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4) does not match the UID in record. 
The object might have been deleted and then recreated Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.775337 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.778065 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-openstack-config-secret\") pod \"openstackclient\" (UID: \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.866753 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f3c910-b4f4-40cf-bf87-aabb54bb76c3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c4f3c910-b4f4-40cf-bf87-aabb54bb76c3\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.866835 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c4f3c910-b4f4-40cf-bf87-aabb54bb76c3-openstack-config\") pod \"openstackclient\" (UID: \"c4f3c910-b4f4-40cf-bf87-aabb54bb76c3\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.866870 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c4f3c910-b4f4-40cf-bf87-aabb54bb76c3-openstack-config-secret\") pod \"openstackclient\" (UID: \"c4f3c910-b4f4-40cf-bf87-aabb54bb76c3\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.866892 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7tjg\" (UniqueName: \"kubernetes.io/projected/c4f3c910-b4f4-40cf-bf87-aabb54bb76c3-kube-api-access-q7tjg\") pod \"openstackclient\" (UID: \"c4f3c910-b4f4-40cf-bf87-aabb54bb76c3\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.968413 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f3c910-b4f4-40cf-bf87-aabb54bb76c3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c4f3c910-b4f4-40cf-bf87-aabb54bb76c3\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.968805 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c4f3c910-b4f4-40cf-bf87-aabb54bb76c3-openstack-config\") pod \"openstackclient\" (UID: \"c4f3c910-b4f4-40cf-bf87-aabb54bb76c3\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.968839 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c4f3c910-b4f4-40cf-bf87-aabb54bb76c3-openstack-config-secret\") pod \"openstackclient\" (UID: \"c4f3c910-b4f4-40cf-bf87-aabb54bb76c3\") " pod="nova-kuttl-default/openstackclient" Jan 30 
17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.968855 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7tjg\" (UniqueName: \"kubernetes.io/projected/c4f3c910-b4f4-40cf-bf87-aabb54bb76c3-kube-api-access-q7tjg\") pod \"openstackclient\" (UID: \"c4f3c910-b4f4-40cf-bf87-aabb54bb76c3\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.969793 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c4f3c910-b4f4-40cf-bf87-aabb54bb76c3-openstack-config\") pod \"openstackclient\" (UID: \"c4f3c910-b4f4-40cf-bf87-aabb54bb76c3\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.973316 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f3c910-b4f4-40cf-bf87-aabb54bb76c3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c4f3c910-b4f4-40cf-bf87-aabb54bb76c3\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.973334 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c4f3c910-b4f4-40cf-bf87-aabb54bb76c3-openstack-config-secret\") pod \"openstackclient\" (UID: \"c4f3c910-b4f4-40cf-bf87-aabb54bb76c3\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:22 crc kubenswrapper[4875]: I0130 17:13:22.989449 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7tjg\" (UniqueName: \"kubernetes.io/projected/c4f3c910-b4f4-40cf-bf87-aabb54bb76c3-kube-api-access-q7tjg\") pod \"openstackclient\" (UID: \"c4f3c910-b4f4-40cf-bf87-aabb54bb76c3\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:23 crc kubenswrapper[4875]: I0130 17:13:23.045905 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 30 17:13:23 crc kubenswrapper[4875]: I0130 17:13:23.274983 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzzqh\" (UniqueName: \"kubernetes.io/projected/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-kube-api-access-fzzqh\") pod \"openstackclient\" (UID: \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\") " pod="nova-kuttl-default/openstackclient" Jan 30 17:13:23 crc kubenswrapper[4875]: E0130 17:13:23.276798 4875 projected.go:194] Error preparing data for projected volume kube-api-access-fzzqh for pod nova-kuttl-default/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4) does not match the UID in record. The object might have been deleted and then recreated Jan 30 17:13:23 crc kubenswrapper[4875]: E0130 17:13:23.276891 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-kube-api-access-fzzqh podName:cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4 nodeName:}" failed. No retries permitted until 2026-01-30 17:13:24.276858561 +0000 UTC m=+1014.824221944 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fzzqh" (UniqueName: "kubernetes.io/projected/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-kube-api-access-fzzqh") pod "openstackclient" (UID: "cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4) does not match the UID in record. The object might have been deleted and then recreated Jan 30 17:13:23 crc kubenswrapper[4875]: I0130 17:13:23.490429 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 30 17:13:23 crc kubenswrapper[4875]: I0130 17:13:23.608678 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 30 17:13:23 crc kubenswrapper[4875]: I0130 17:13:23.608767 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"c4f3c910-b4f4-40cf-bf87-aabb54bb76c3","Type":"ContainerStarted","Data":"f012e3775bf7d12d334b3b84d55e66511d71518082b653fef84a8de328e10d5d"} Jan 30 17:13:23 crc kubenswrapper[4875]: I0130 17:13:23.611898 4875 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="nova-kuttl-default/openstackclient" oldPodUID="cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4" podUID="c4f3c910-b4f4-40cf-bf87-aabb54bb76c3" Jan 30 17:13:23 crc kubenswrapper[4875]: I0130 17:13:23.621071 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 30 17:13:23 crc kubenswrapper[4875]: I0130 17:13:23.681307 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-openstack-config\") pod \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\" (UID: \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\") " Jan 30 17:13:23 crc kubenswrapper[4875]: I0130 17:13:23.681409 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-openstack-config-secret\") pod \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\" (UID: \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\") " Jan 30 17:13:23 crc kubenswrapper[4875]: I0130 17:13:23.681428 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-combined-ca-bundle\") pod \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\" (UID: \"cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4\") " Jan 30 17:13:23 crc kubenswrapper[4875]: I0130 17:13:23.681664 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzzqh\" (UniqueName: \"kubernetes.io/projected/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-kube-api-access-fzzqh\") on node \"crc\" DevicePath \"\"" Jan 30 17:13:23 crc kubenswrapper[4875]: I0130 17:13:23.682211 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4" (UID: "cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:13:23 crc kubenswrapper[4875]: I0130 17:13:23.685349 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4" (UID: "cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:13:23 crc kubenswrapper[4875]: I0130 17:13:23.685475 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4" (UID: "cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:13:23 crc kubenswrapper[4875]: I0130 17:13:23.783437 4875 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:13:23 crc kubenswrapper[4875]: I0130 17:13:23.783490 4875 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 17:13:23 crc kubenswrapper[4875]: I0130 17:13:23.783514 4875 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:13:24 crc kubenswrapper[4875]: I0130 17:13:24.152263 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4" path="/var/lib/kubelet/pods/cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4/volumes" Jan 30 17:13:24 crc kubenswrapper[4875]: I0130 17:13:24.615937 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 30 17:13:24 crc kubenswrapper[4875]: I0130 17:13:24.621136 4875 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="nova-kuttl-default/openstackclient" oldPodUID="cf2c0509-1dd9-4a0c-a13d-391bb7e66fa4" podUID="c4f3c910-b4f4-40cf-bf87-aabb54bb76c3" Jan 30 17:13:31 crc kubenswrapper[4875]: I0130 17:13:31.697962 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"c4f3c910-b4f4-40cf-bf87-aabb54bb76c3","Type":"ContainerStarted","Data":"0068202f42b47a7b25677de8437df67abe4f6bffb82635d85be1e7a9795bc6c9"} Jan 30 17:13:44 crc kubenswrapper[4875]: I0130 17:13:44.364914 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:44 crc kubenswrapper[4875]: I0130 17:13:44.392784 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstackclient" podStartSLOduration=14.486257457 podStartE2EDuration="22.392749058s" podCreationTimestamp="2026-01-30 17:13:22 +0000 UTC" firstStartedPulling="2026-01-30 17:13:23.495031456 +0000 UTC m=+1014.042394839" lastFinishedPulling="2026-01-30 17:13:31.401523057 +0000 UTC m=+1021.948886440" observedRunningTime="2026-01-30 17:13:31.725323144 +0000 UTC m=+1022.272686537" watchObservedRunningTime="2026-01-30 17:13:44.392749058 +0000 UTC m=+1034.940112471" Jan 30 17:13:44 crc kubenswrapper[4875]: I0130 17:13:44.412669 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-696447b7b-gwj9q" Jan 30 17:13:44 crc kubenswrapper[4875]: I0130 17:13:44.467739 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-5d585776fb-7z44m"] Jan 30 17:13:44 crc kubenswrapper[4875]: I0130 17:13:44.468362 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/placement-5d585776fb-7z44m" podUID="a2370b24-9afc-4626-b761-00e89f8a6b84" containerName="placement-log" containerID="cri-o://df429df0ea9794f45319f6a0e1565b428bff05814e5b22aec677c7ed70d9c5ff" gracePeriod=30 Jan 30 17:13:44 crc kubenswrapper[4875]: I0130 17:13:44.468890 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/placement-5d585776fb-7z44m" podUID="a2370b24-9afc-4626-b761-00e89f8a6b84" containerName="placement-api" containerID="cri-o://7d2643ac15756f8659d7b75d39b1e1c9f307ff226a0d06e64e4e08184b1ab421" gracePeriod=30 Jan 30 17:13:45 crc kubenswrapper[4875]: I0130 17:13:45.809184 4875 generic.go:334] "Generic (PLEG): container finished" podID="a2370b24-9afc-4626-b761-00e89f8a6b84" containerID="df429df0ea9794f45319f6a0e1565b428bff05814e5b22aec677c7ed70d9c5ff" exitCode=143 Jan 30 17:13:45 crc kubenswrapper[4875]: I0130 17:13:45.809334 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-5d585776fb-7z44m" event={"ID":"a2370b24-9afc-4626-b761-00e89f8a6b84","Type":"ContainerDied","Data":"df429df0ea9794f45319f6a0e1565b428bff05814e5b22aec677c7ed70d9c5ff"} Jan 30 17:13:51 crc kubenswrapper[4875]: I0130 17:13:51.860741 4875 generic.go:334] "Generic (PLEG): container finished" podID="a2370b24-9afc-4626-b761-00e89f8a6b84" containerID="7d2643ac15756f8659d7b75d39b1e1c9f307ff226a0d06e64e4e08184b1ab421" exitCode=0 Jan 30 17:13:51 crc kubenswrapper[4875]: I0130 17:13:51.860891 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/placement-5d585776fb-7z44m" event={"ID":"a2370b24-9afc-4626-b761-00e89f8a6b84","Type":"ContainerDied","Data":"7d2643ac15756f8659d7b75d39b1e1c9f307ff226a0d06e64e4e08184b1ab421"} Jan 30 17:13:52 crc kubenswrapper[4875]: I0130 17:13:52.956213 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.053698 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-scripts\") pod \"a2370b24-9afc-4626-b761-00e89f8a6b84\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.053786 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2370b24-9afc-4626-b761-00e89f8a6b84-logs\") pod \"a2370b24-9afc-4626-b761-00e89f8a6b84\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.053818 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-config-data\") pod \"a2370b24-9afc-4626-b761-00e89f8a6b84\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.054079 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-combined-ca-bundle\") pod \"a2370b24-9afc-4626-b761-00e89f8a6b84\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.054218 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx4hb\" (UniqueName: \"kubernetes.io/projected/a2370b24-9afc-4626-b761-00e89f8a6b84-kube-api-access-dx4hb\") pod \"a2370b24-9afc-4626-b761-00e89f8a6b84\" (UID: \"a2370b24-9afc-4626-b761-00e89f8a6b84\") " Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.055443 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2370b24-9afc-4626-b761-00e89f8a6b84-logs" (OuterVolumeSpecName: "logs") pod "a2370b24-9afc-4626-b761-00e89f8a6b84" (UID: "a2370b24-9afc-4626-b761-00e89f8a6b84"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.059915 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-scripts" (OuterVolumeSpecName: "scripts") pod "a2370b24-9afc-4626-b761-00e89f8a6b84" (UID: "a2370b24-9afc-4626-b761-00e89f8a6b84"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.061516 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2370b24-9afc-4626-b761-00e89f8a6b84-kube-api-access-dx4hb" (OuterVolumeSpecName: "kube-api-access-dx4hb") pod "a2370b24-9afc-4626-b761-00e89f8a6b84" (UID: "a2370b24-9afc-4626-b761-00e89f8a6b84"). InnerVolumeSpecName "kube-api-access-dx4hb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.103796 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2370b24-9afc-4626-b761-00e89f8a6b84" (UID: "a2370b24-9afc-4626-b761-00e89f8a6b84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.110259 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-config-data" (OuterVolumeSpecName: "config-data") pod "a2370b24-9afc-4626-b761-00e89f8a6b84" (UID: "a2370b24-9afc-4626-b761-00e89f8a6b84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.155357 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.155389 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2370b24-9afc-4626-b761-00e89f8a6b84-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.155397 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.155409 4875 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2370b24-9afc-4626-b761-00e89f8a6b84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.155420 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx4hb\" (UniqueName: \"kubernetes.io/projected/a2370b24-9afc-4626-b761-00e89f8a6b84-kube-api-access-dx4hb\") on node \"crc\" DevicePath \"\"" Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.881784 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-5d585776fb-7z44m" event={"ID":"a2370b24-9afc-4626-b761-00e89f8a6b84","Type":"ContainerDied","Data":"4b29b3dd9990a63059e55ab02a5d1c65f0c70d0d9856f36ba7a9b45a66a72d02"} Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.881843 4875 scope.go:117] "RemoveContainer" containerID="7d2643ac15756f8659d7b75d39b1e1c9f307ff226a0d06e64e4e08184b1ab421" Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.881897 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-5d585776fb-7z44m" Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.912369 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-5d585776fb-7z44m"] Jan 30 17:13:53 crc kubenswrapper[4875]: I0130 17:13:53.920486 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-5d585776fb-7z44m"] Jan 30 17:13:54 crc kubenswrapper[4875]: I0130 17:13:54.144651 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2370b24-9afc-4626-b761-00e89f8a6b84" path="/var/lib/kubelet/pods/a2370b24-9afc-4626-b761-00e89f8a6b84/volumes" Jan 30 17:13:57 crc kubenswrapper[4875]: I0130 17:13:57.829078 4875 scope.go:117] "RemoveContainer" containerID="df429df0ea9794f45319f6a0e1565b428bff05814e5b22aec677c7ed70d9c5ff" Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.165335 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r"] Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.166050 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r" podUID="b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d" containerName="operator" containerID="cri-o://6c36d432ebf9367d608513184b4805d9a3e1655fd4d5ee543724ccba99e8b017" gracePeriod=10 Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.315691 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5c487c8746-9msld"] Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.315934 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/nova-operator-controller-manager-5c487c8746-9msld" podUID="bbef4553-54c5-4fcb-9868-49c67b9420b5" containerName="manager" containerID="cri-o://4235275f9be82d0a6f0f012a96bcf0afe01ab85652f997b514716e94b502ade4" gracePeriod=10 Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.599160 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r" Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.701159 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6lhq\" (UniqueName: \"kubernetes.io/projected/b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d-kube-api-access-d6lhq\") pod \"b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d\" (UID: \"b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d\") " Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.711166 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d-kube-api-access-d6lhq" (OuterVolumeSpecName: "kube-api-access-d6lhq") pod "b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d" (UID: "b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d"). InnerVolumeSpecName "kube-api-access-d6lhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.750864 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5c487c8746-9msld" Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.772876 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-index-dk6fs"] Jan 30 17:14:13 crc kubenswrapper[4875]: E0130 17:14:13.773164 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2370b24-9afc-4626-b761-00e89f8a6b84" containerName="placement-log" Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.773180 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2370b24-9afc-4626-b761-00e89f8a6b84" containerName="placement-log" Jan 30 17:14:13 crc kubenswrapper[4875]: E0130 17:14:13.773197 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbef4553-54c5-4fcb-9868-49c67b9420b5" containerName="manager" Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.773203 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbef4553-54c5-4fcb-9868-49c67b9420b5" containerName="manager" Jan 30 17:14:13 crc kubenswrapper[4875]: E0130 17:14:13.773213 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d" containerName="operator" Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.773219 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d" containerName="operator" Jan 30 17:14:13 crc kubenswrapper[4875]: E0130 17:14:13.773234 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2370b24-9afc-4626-b761-00e89f8a6b84" containerName="placement-api" Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.773240 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2370b24-9afc-4626-b761-00e89f8a6b84" containerName="placement-api" Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.773356 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbef4553-54c5-4fcb-9868-49c67b9420b5" containerName="manager" Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.773369 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d" containerName="operator" Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.773376 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2370b24-9afc-4626-b761-00e89f8a6b84" containerName="placement-log" Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.773387 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2370b24-9afc-4626-b761-00e89f8a6b84" containerName="placement-api" Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.773939 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-dk6fs" Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.803126 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6lhq\" (UniqueName: \"kubernetes.io/projected/b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d-kube-api-access-d6lhq\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.844085 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-dk6fs"] Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.884234 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-index-dockercfg-wjsfx" Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.904352 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j27qn\" (UniqueName: \"kubernetes.io/projected/bbef4553-54c5-4fcb-9868-49c67b9420b5-kube-api-access-j27qn\") pod \"bbef4553-54c5-4fcb-9868-49c67b9420b5\" (UID: \"bbef4553-54c5-4fcb-9868-49c67b9420b5\") " Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.904752 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhplw\" (UniqueName: \"kubernetes.io/projected/92a3bc1c-99ff-485b-a2b2-6f838508f5bb-kube-api-access-bhplw\") pod \"nova-operator-index-dk6fs\" (UID: \"92a3bc1c-99ff-485b-a2b2-6f838508f5bb\") " pod="openstack-operators/nova-operator-index-dk6fs" Jan 30 17:14:13 crc kubenswrapper[4875]: I0130 17:14:13.907914 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbef4553-54c5-4fcb-9868-49c67b9420b5-kube-api-access-j27qn" (OuterVolumeSpecName: "kube-api-access-j27qn") pod "bbef4553-54c5-4fcb-9868-49c67b9420b5" (UID: "bbef4553-54c5-4fcb-9868-49c67b9420b5"). InnerVolumeSpecName "kube-api-access-j27qn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.005981 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhplw\" (UniqueName: \"kubernetes.io/projected/92a3bc1c-99ff-485b-a2b2-6f838508f5bb-kube-api-access-bhplw\") pod \"nova-operator-index-dk6fs\" (UID: \"92a3bc1c-99ff-485b-a2b2-6f838508f5bb\") " pod="openstack-operators/nova-operator-index-dk6fs" Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.006101 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j27qn\" (UniqueName: \"kubernetes.io/projected/bbef4553-54c5-4fcb-9868-49c67b9420b5-kube-api-access-j27qn\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.039286 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhplw\" (UniqueName: \"kubernetes.io/projected/92a3bc1c-99ff-485b-a2b2-6f838508f5bb-kube-api-access-bhplw\") pod \"nova-operator-index-dk6fs\" (UID: \"92a3bc1c-99ff-485b-a2b2-6f838508f5bb\") " pod="openstack-operators/nova-operator-index-dk6fs" Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.060157 4875 generic.go:334] "Generic (PLEG): container finished" podID="b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d" containerID="6c36d432ebf9367d608513184b4805d9a3e1655fd4d5ee543724ccba99e8b017" exitCode=0 Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.060237 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r" event={"ID":"b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d","Type":"ContainerDied","Data":"6c36d432ebf9367d608513184b4805d9a3e1655fd4d5ee543724ccba99e8b017"} Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.060267 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r" Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.060288 4875 scope.go:117] "RemoveContainer" containerID="6c36d432ebf9367d608513184b4805d9a3e1655fd4d5ee543724ccba99e8b017" Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.060277 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r" event={"ID":"b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d","Type":"ContainerDied","Data":"f3321421085f46cebae966c3e4eda008045c82586a6f1eb79f7f1389009c70e8"} Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.061614 4875 generic.go:334] "Generic (PLEG): container finished" podID="bbef4553-54c5-4fcb-9868-49c67b9420b5" containerID="4235275f9be82d0a6f0f012a96bcf0afe01ab85652f997b514716e94b502ade4" exitCode=0 Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.061638 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5c487c8746-9msld" event={"ID":"bbef4553-54c5-4fcb-9868-49c67b9420b5","Type":"ContainerDied","Data":"4235275f9be82d0a6f0f012a96bcf0afe01ab85652f997b514716e94b502ade4"} Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.061654 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5c487c8746-9msld" event={"ID":"bbef4553-54c5-4fcb-9868-49c67b9420b5","Type":"ContainerDied","Data":"0fed439d60a3f680db875f1935c7769140ea294ddfd35ee7ea0fe26fe672ed46"} Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.061676 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5c487c8746-9msld" Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.085734 4875 scope.go:117] "RemoveContainer" containerID="6c36d432ebf9367d608513184b4805d9a3e1655fd4d5ee543724ccba99e8b017" Jan 30 17:14:14 crc kubenswrapper[4875]: E0130 17:14:14.086260 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c36d432ebf9367d608513184b4805d9a3e1655fd4d5ee543724ccba99e8b017\": container with ID starting with 6c36d432ebf9367d608513184b4805d9a3e1655fd4d5ee543724ccba99e8b017 not found: ID does not exist" containerID="6c36d432ebf9367d608513184b4805d9a3e1655fd4d5ee543724ccba99e8b017" Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.086293 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c36d432ebf9367d608513184b4805d9a3e1655fd4d5ee543724ccba99e8b017"} err="failed to get container status \"6c36d432ebf9367d608513184b4805d9a3e1655fd4d5ee543724ccba99e8b017\": rpc error: code = NotFound desc = could not find container \"6c36d432ebf9367d608513184b4805d9a3e1655fd4d5ee543724ccba99e8b017\": container with ID starting with 6c36d432ebf9367d608513184b4805d9a3e1655fd4d5ee543724ccba99e8b017 not found: ID does not exist" Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.086315 4875 scope.go:117] "RemoveContainer" containerID="4235275f9be82d0a6f0f012a96bcf0afe01ab85652f997b514716e94b502ade4" Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.091357 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-dk6fs" Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.165834 4875 scope.go:117] "RemoveContainer" containerID="4235275f9be82d0a6f0f012a96bcf0afe01ab85652f997b514716e94b502ade4" Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.166975 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r"] Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.173647 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-64d87976dc-xvd5r"] Jan 30 17:14:14 crc kubenswrapper[4875]: E0130 17:14:14.175177 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4235275f9be82d0a6f0f012a96bcf0afe01ab85652f997b514716e94b502ade4\": container with ID starting with 4235275f9be82d0a6f0f012a96bcf0afe01ab85652f997b514716e94b502ade4 not found: ID does not exist" containerID="4235275f9be82d0a6f0f012a96bcf0afe01ab85652f997b514716e94b502ade4" Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.175218 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4235275f9be82d0a6f0f012a96bcf0afe01ab85652f997b514716e94b502ade4"} err="failed to get container status \"4235275f9be82d0a6f0f012a96bcf0afe01ab85652f997b514716e94b502ade4\": rpc error: code = NotFound desc = could not find container \"4235275f9be82d0a6f0f012a96bcf0afe01ab85652f997b514716e94b502ade4\": container with ID starting with 4235275f9be82d0a6f0f012a96bcf0afe01ab85652f997b514716e94b502ade4 not found: ID does not exist" Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.185866 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5c487c8746-9msld"] Jan 30 17:14:14 crc 
kubenswrapper[4875]: I0130 17:14:14.195443 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5c487c8746-9msld"] Jan 30 17:14:14 crc kubenswrapper[4875]: E0130 17:14:14.211203 4875 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbef4553_54c5_4fcb_9868_49c67b9420b5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb04ddcc2_175d_48b8_85a0_abf6c2d2aa7d.slice/crio-f3321421085f46cebae966c3e4eda008045c82586a6f1eb79f7f1389009c70e8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbef4553_54c5_4fcb_9868_49c67b9420b5.slice/crio-0fed439d60a3f680db875f1935c7769140ea294ddfd35ee7ea0fe26fe672ed46\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb04ddcc2_175d_48b8_85a0_abf6c2d2aa7d.slice\": RecentStats: unable to find data in memory cache]" Jan 30 17:14:14 crc kubenswrapper[4875]: I0130 17:14:14.592113 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-dk6fs"] Jan 30 17:14:15 crc kubenswrapper[4875]: I0130 17:14:15.070934 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-dk6fs" event={"ID":"92a3bc1c-99ff-485b-a2b2-6f838508f5bb","Type":"ContainerStarted","Data":"b4926b1ffd159bb4710637559381e8edfa9e35d4cdc2622863aab9e9c5e864e3"} Jan 30 17:14:15 crc kubenswrapper[4875]: I0130 17:14:15.971483 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-index-dk6fs"] Jan 30 17:14:16 crc kubenswrapper[4875]: I0130 17:14:16.146765 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d" path="/var/lib/kubelet/pods/b04ddcc2-175d-48b8-85a0-abf6c2d2aa7d/volumes" Jan 30 17:14:16 crc kubenswrapper[4875]: I0130 17:14:16.147517 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbef4553-54c5-4fcb-9868-49c67b9420b5" path="/var/lib/kubelet/pods/bbef4553-54c5-4fcb-9868-49c67b9420b5/volumes" Jan 30 17:14:16 crc kubenswrapper[4875]: I0130 17:14:16.383838 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-index-pd8bb"] Jan 30 17:14:16 crc kubenswrapper[4875]: I0130 17:14:16.384674 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-pd8bb"] Jan 30 17:14:16 crc kubenswrapper[4875]: I0130 17:14:16.384763 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-pd8bb" Jan 30 17:14:16 crc kubenswrapper[4875]: I0130 17:14:16.577888 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvqm9\" (UniqueName: \"kubernetes.io/projected/87fba6ee-2538-48b8-8a3d-cdd9308305a6-kube-api-access-qvqm9\") pod \"nova-operator-index-pd8bb\" (UID: \"87fba6ee-2538-48b8-8a3d-cdd9308305a6\") " pod="openstack-operators/nova-operator-index-pd8bb" Jan 30 17:14:16 crc kubenswrapper[4875]: I0130 17:14:16.679274 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvqm9\" (UniqueName: \"kubernetes.io/projected/87fba6ee-2538-48b8-8a3d-cdd9308305a6-kube-api-access-qvqm9\") pod \"nova-operator-index-pd8bb\" (UID: \"87fba6ee-2538-48b8-8a3d-cdd9308305a6\") " pod="openstack-operators/nova-operator-index-pd8bb" Jan 30 17:14:16 crc kubenswrapper[4875]: I0130 17:14:16.700243 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvqm9\" (UniqueName: \"kubernetes.io/projected/87fba6ee-2538-48b8-8a3d-cdd9308305a6-kube-api-access-qvqm9\") pod \"nova-operator-index-pd8bb\" (UID: \"87fba6ee-2538-48b8-8a3d-cdd9308305a6\") " pod="openstack-operators/nova-operator-index-pd8bb" Jan 30 17:14:16 crc kubenswrapper[4875]: I0130 17:14:16.909732 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-pd8bb" Jan 30 17:14:17 crc kubenswrapper[4875]: I0130 17:14:17.097515 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-dk6fs" event={"ID":"92a3bc1c-99ff-485b-a2b2-6f838508f5bb","Type":"ContainerStarted","Data":"2140764bd6054cb6399ae69a139a9ef2a5dd0b2a740d55547c6e03c90f224931"} Jan 30 17:14:17 crc kubenswrapper[4875]: I0130 17:14:17.097824 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/nova-operator-index-dk6fs" podUID="92a3bc1c-99ff-485b-a2b2-6f838508f5bb" containerName="registry-server" containerID="cri-o://2140764bd6054cb6399ae69a139a9ef2a5dd0b2a740d55547c6e03c90f224931" gracePeriod=2 Jan 30 17:14:17 crc kubenswrapper[4875]: I0130 17:14:17.126571 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-index-dk6fs" podStartSLOduration=2.322018713 podStartE2EDuration="4.126550204s" podCreationTimestamp="2026-01-30 17:14:13 +0000 UTC" firstStartedPulling="2026-01-30 17:14:14.602827506 +0000 UTC m=+1065.150190889" lastFinishedPulling="2026-01-30 17:14:16.407358997 +0000 UTC m=+1066.954722380" observedRunningTime="2026-01-30 17:14:17.126507532 +0000 UTC m=+1067.673870915" watchObservedRunningTime="2026-01-30 17:14:17.126550204 +0000 UTC m=+1067.673913587" Jan 30 17:14:17 crc kubenswrapper[4875]: I0130 17:14:17.340132 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-pd8bb"] Jan 30 17:14:17 crc kubenswrapper[4875]: W0130 17:14:17.342724 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87fba6ee_2538_48b8_8a3d_cdd9308305a6.slice/crio-099224e789e52eee895b6c7b9a4daad28050ad6a334e4bd117ff39b5ab751560 WatchSource:0}: Error finding container 099224e789e52eee895b6c7b9a4daad28050ad6a334e4bd117ff39b5ab751560: Status 404 returned error can't find the container with id 099224e789e52eee895b6c7b9a4daad28050ad6a334e4bd117ff39b5ab751560 Jan 30 17:14:18 crc 
kubenswrapper[4875]: I0130 17:14:18.104800 4875 generic.go:334] "Generic (PLEG): container finished" podID="92a3bc1c-99ff-485b-a2b2-6f838508f5bb" containerID="2140764bd6054cb6399ae69a139a9ef2a5dd0b2a740d55547c6e03c90f224931" exitCode=0 Jan 30 17:14:18 crc kubenswrapper[4875]: I0130 17:14:18.104868 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-dk6fs" event={"ID":"92a3bc1c-99ff-485b-a2b2-6f838508f5bb","Type":"ContainerDied","Data":"2140764bd6054cb6399ae69a139a9ef2a5dd0b2a740d55547c6e03c90f224931"} Jan 30 17:14:18 crc kubenswrapper[4875]: I0130 17:14:18.105062 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-dk6fs" event={"ID":"92a3bc1c-99ff-485b-a2b2-6f838508f5bb","Type":"ContainerDied","Data":"b4926b1ffd159bb4710637559381e8edfa9e35d4cdc2622863aab9e9c5e864e3"} Jan 30 17:14:18 crc kubenswrapper[4875]: I0130 17:14:18.105077 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4926b1ffd159bb4710637559381e8edfa9e35d4cdc2622863aab9e9c5e864e3" Jan 30 17:14:18 crc kubenswrapper[4875]: I0130 17:14:18.106472 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-pd8bb" event={"ID":"87fba6ee-2538-48b8-8a3d-cdd9308305a6","Type":"ContainerStarted","Data":"185ade1c15c3a802f9ab0cc9df09e3b7a0b4b62a2ce7fca6e7b94e1742b996f2"} Jan 30 17:14:18 crc kubenswrapper[4875]: I0130 17:14:18.106514 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-pd8bb" event={"ID":"87fba6ee-2538-48b8-8a3d-cdd9308305a6","Type":"ContainerStarted","Data":"099224e789e52eee895b6c7b9a4daad28050ad6a334e4bd117ff39b5ab751560"} Jan 30 17:14:18 crc kubenswrapper[4875]: I0130 17:14:18.109421 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-dk6fs" Jan 30 17:14:18 crc kubenswrapper[4875]: I0130 17:14:18.125870 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-index-pd8bb" podStartSLOduration=1.816726356 podStartE2EDuration="2.125851711s" podCreationTimestamp="2026-01-30 17:14:16 +0000 UTC" firstStartedPulling="2026-01-30 17:14:17.346592104 +0000 UTC m=+1067.893955487" lastFinishedPulling="2026-01-30 17:14:17.655717459 +0000 UTC m=+1068.203080842" observedRunningTime="2026-01-30 17:14:18.120698311 +0000 UTC m=+1068.668061694" watchObservedRunningTime="2026-01-30 17:14:18.125851711 +0000 UTC m=+1068.673215094" Jan 30 17:14:18 crc kubenswrapper[4875]: I0130 17:14:18.302642 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhplw\" (UniqueName: \"kubernetes.io/projected/92a3bc1c-99ff-485b-a2b2-6f838508f5bb-kube-api-access-bhplw\") pod \"92a3bc1c-99ff-485b-a2b2-6f838508f5bb\" (UID: \"92a3bc1c-99ff-485b-a2b2-6f838508f5bb\") " Jan 30 17:14:18 crc kubenswrapper[4875]: I0130 17:14:18.309509 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92a3bc1c-99ff-485b-a2b2-6f838508f5bb-kube-api-access-bhplw" (OuterVolumeSpecName: "kube-api-access-bhplw") pod "92a3bc1c-99ff-485b-a2b2-6f838508f5bb" (UID: "92a3bc1c-99ff-485b-a2b2-6f838508f5bb"). InnerVolumeSpecName "kube-api-access-bhplw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:18 crc kubenswrapper[4875]: I0130 17:14:18.404371 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhplw\" (UniqueName: \"kubernetes.io/projected/92a3bc1c-99ff-485b-a2b2-6f838508f5bb-kube-api-access-bhplw\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:19 crc kubenswrapper[4875]: I0130 17:14:19.112742 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-dk6fs" Jan 30 17:14:19 crc kubenswrapper[4875]: I0130 17:14:19.146227 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-index-dk6fs"] Jan 30 17:14:19 crc kubenswrapper[4875]: I0130 17:14:19.154744 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/nova-operator-index-dk6fs"] Jan 30 17:14:20 crc kubenswrapper[4875]: I0130 17:14:20.150109 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92a3bc1c-99ff-485b-a2b2-6f838508f5bb" path="/var/lib/kubelet/pods/92a3bc1c-99ff-485b-a2b2-6f838508f5bb/volumes" Jan 30 17:14:26 crc kubenswrapper[4875]: I0130 17:14:26.287295 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:14:26 crc kubenswrapper[4875]: I0130 17:14:26.287938 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:14:26 crc kubenswrapper[4875]: I0130 17:14:26.910702 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-index-pd8bb" Jan 30 17:14:26 crc kubenswrapper[4875]: I0130 17:14:26.911029 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/nova-operator-index-pd8bb" Jan 30 17:14:26 crc kubenswrapper[4875]: I0130 17:14:26.945382 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/nova-operator-index-pd8bb" Jan 30 17:14:27 crc kubenswrapper[4875]: I0130 17:14:27.198162 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-index-pd8bb" Jan 30 17:14:34 crc kubenswrapper[4875]: I0130 17:14:34.633815 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n"] Jan 30 17:14:34 crc kubenswrapper[4875]: E0130 17:14:34.634714 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a3bc1c-99ff-485b-a2b2-6f838508f5bb" containerName="registry-server" Jan 30 17:14:34 crc kubenswrapper[4875]: I0130 17:14:34.634731 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a3bc1c-99ff-485b-a2b2-6f838508f5bb" containerName="registry-server" Jan 30 17:14:34 crc kubenswrapper[4875]: I0130 17:14:34.634915 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="92a3bc1c-99ff-485b-a2b2-6f838508f5bb" containerName="registry-server" Jan 30 17:14:34 crc kubenswrapper[4875]: I0130 17:14:34.636193 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" Jan 30 17:14:34 crc kubenswrapper[4875]: I0130 17:14:34.639260 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-g8j98" Jan 30 17:14:34 crc kubenswrapper[4875]: I0130 17:14:34.743298 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n"] Jan 30 17:14:34 crc kubenswrapper[4875]: I0130 17:14:34.753824 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7390a607-60b7-4f18-af7a-b4391c97a01f-util\") pod \"3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n\" (UID: \"7390a607-60b7-4f18-af7a-b4391c97a01f\") " pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" Jan 30 17:14:34 crc kubenswrapper[4875]: I0130 17:14:34.753875 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6q72\" (UniqueName: \"kubernetes.io/projected/7390a607-60b7-4f18-af7a-b4391c97a01f-kube-api-access-n6q72\") pod \"3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n\" (UID: \"7390a607-60b7-4f18-af7a-b4391c97a01f\") " pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" Jan 30 17:14:34 crc kubenswrapper[4875]: I0130 17:14:34.753909 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7390a607-60b7-4f18-af7a-b4391c97a01f-bundle\") pod \"3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n\" (UID: \"7390a607-60b7-4f18-af7a-b4391c97a01f\") " pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" Jan 30 17:14:34 crc kubenswrapper[4875]: I0130 17:14:34.854836 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7390a607-60b7-4f18-af7a-b4391c97a01f-util\") pod \"3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n\" (UID: \"7390a607-60b7-4f18-af7a-b4391c97a01f\") " pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" Jan 30 17:14:34 crc kubenswrapper[4875]: I0130 17:14:34.854886 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6q72\" (UniqueName: \"kubernetes.io/projected/7390a607-60b7-4f18-af7a-b4391c97a01f-kube-api-access-n6q72\") pod \"3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n\" (UID: \"7390a607-60b7-4f18-af7a-b4391c97a01f\") " pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" Jan 30 17:14:34 crc kubenswrapper[4875]: I0130 17:14:34.854919 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7390a607-60b7-4f18-af7a-b4391c97a01f-bundle\") pod \"3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n\" (UID: \"7390a607-60b7-4f18-af7a-b4391c97a01f\") " pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" Jan 30 17:14:34 crc kubenswrapper[4875]: I0130 17:14:34.855512 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/7390a607-60b7-4f18-af7a-b4391c97a01f-bundle\") pod \"3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n\" (UID: \"7390a607-60b7-4f18-af7a-b4391c97a01f\") " pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" Jan 30 17:14:34 crc kubenswrapper[4875]: I0130 17:14:34.855520 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7390a607-60b7-4f18-af7a-b4391c97a01f-util\") pod \"3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n\" (UID: \"7390a607-60b7-4f18-af7a-b4391c97a01f\") " pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" Jan 30 17:14:34 crc kubenswrapper[4875]: I0130 17:14:34.872698 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6q72\" (UniqueName: \"kubernetes.io/projected/7390a607-60b7-4f18-af7a-b4391c97a01f-kube-api-access-n6q72\") pod \"3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n\" (UID: \"7390a607-60b7-4f18-af7a-b4391c97a01f\") " pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" Jan 30 17:14:34 crc kubenswrapper[4875]: I0130 17:14:34.959182 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" Jan 30 17:14:35 crc kubenswrapper[4875]: I0130 17:14:35.399703 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n"] Jan 30 17:14:36 crc kubenswrapper[4875]: I0130 17:14:36.232722 4875 generic.go:334] "Generic (PLEG): container finished" podID="7390a607-60b7-4f18-af7a-b4391c97a01f" containerID="ac16c9992ca52dc5da9e599b7683dbef192b19e2487297d8765ecd1f2b6ff2d5" exitCode=0 Jan 30 17:14:36 crc kubenswrapper[4875]: I0130 17:14:36.232815 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" event={"ID":"7390a607-60b7-4f18-af7a-b4391c97a01f","Type":"ContainerDied","Data":"ac16c9992ca52dc5da9e599b7683dbef192b19e2487297d8765ecd1f2b6ff2d5"} Jan 30 17:14:36 crc kubenswrapper[4875]: I0130 17:14:36.233062 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" event={"ID":"7390a607-60b7-4f18-af7a-b4391c97a01f","Type":"ContainerStarted","Data":"d969bcaf9b3a7a4c6a73b4b15488c6ea9483d56abea959f5b0256183b0a9c094"} Jan 30 17:14:38 crc kubenswrapper[4875]: I0130 17:14:38.248325 4875 generic.go:334] "Generic (PLEG): container finished" podID="7390a607-60b7-4f18-af7a-b4391c97a01f" containerID="9d0769cef7116c1c57e3ceb6d7d0dec3e2821c8e4627388a5d43d99ad69f544e" exitCode=0 Jan 30 17:14:38 crc kubenswrapper[4875]: I0130 17:14:38.248535 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" event={"ID":"7390a607-60b7-4f18-af7a-b4391c97a01f","Type":"ContainerDied","Data":"9d0769cef7116c1c57e3ceb6d7d0dec3e2821c8e4627388a5d43d99ad69f544e"} Jan 30 17:14:39 crc kubenswrapper[4875]: I0130 17:14:39.259785 4875 generic.go:334] "Generic (PLEG): container finished" podID="7390a607-60b7-4f18-af7a-b4391c97a01f" containerID="97c68a46b4ae50c044c72c595ec3694a8b071ee35558e5b053a3112a1d33d358" exitCode=0 Jan 30 17:14:39 crc kubenswrapper[4875]: I0130 17:14:39.259824 4875 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" event={"ID":"7390a607-60b7-4f18-af7a-b4391c97a01f","Type":"ContainerDied","Data":"97c68a46b4ae50c044c72c595ec3694a8b071ee35558e5b053a3112a1d33d358"} Jan 30 17:14:40 crc kubenswrapper[4875]: I0130 17:14:40.605680 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" Jan 30 17:14:40 crc kubenswrapper[4875]: I0130 17:14:40.644055 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7390a607-60b7-4f18-af7a-b4391c97a01f-bundle\") pod \"7390a607-60b7-4f18-af7a-b4391c97a01f\" (UID: \"7390a607-60b7-4f18-af7a-b4391c97a01f\") " Jan 30 17:14:40 crc kubenswrapper[4875]: I0130 17:14:40.644306 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7390a607-60b7-4f18-af7a-b4391c97a01f-util\") pod \"7390a607-60b7-4f18-af7a-b4391c97a01f\" (UID: \"7390a607-60b7-4f18-af7a-b4391c97a01f\") " Jan 30 17:14:40 crc kubenswrapper[4875]: I0130 17:14:40.644343 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6q72\" (UniqueName: \"kubernetes.io/projected/7390a607-60b7-4f18-af7a-b4391c97a01f-kube-api-access-n6q72\") pod \"7390a607-60b7-4f18-af7a-b4391c97a01f\" (UID: \"7390a607-60b7-4f18-af7a-b4391c97a01f\") " Jan 30 17:14:40 crc kubenswrapper[4875]: I0130 17:14:40.646217 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7390a607-60b7-4f18-af7a-b4391c97a01f-bundle" (OuterVolumeSpecName: "bundle") pod "7390a607-60b7-4f18-af7a-b4391c97a01f" (UID: "7390a607-60b7-4f18-af7a-b4391c97a01f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:14:40 crc kubenswrapper[4875]: I0130 17:14:40.650607 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7390a607-60b7-4f18-af7a-b4391c97a01f-kube-api-access-n6q72" (OuterVolumeSpecName: "kube-api-access-n6q72") pod "7390a607-60b7-4f18-af7a-b4391c97a01f" (UID: "7390a607-60b7-4f18-af7a-b4391c97a01f"). InnerVolumeSpecName "kube-api-access-n6q72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:40 crc kubenswrapper[4875]: I0130 17:14:40.657391 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7390a607-60b7-4f18-af7a-b4391c97a01f-util" (OuterVolumeSpecName: "util") pod "7390a607-60b7-4f18-af7a-b4391c97a01f" (UID: "7390a607-60b7-4f18-af7a-b4391c97a01f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:14:40 crc kubenswrapper[4875]: I0130 17:14:40.747405 4875 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7390a607-60b7-4f18-af7a-b4391c97a01f-util\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:40 crc kubenswrapper[4875]: I0130 17:14:40.747830 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6q72\" (UniqueName: \"kubernetes.io/projected/7390a607-60b7-4f18-af7a-b4391c97a01f-kube-api-access-n6q72\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:40 crc kubenswrapper[4875]: I0130 17:14:40.747842 4875 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7390a607-60b7-4f18-af7a-b4391c97a01f-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:41 crc kubenswrapper[4875]: I0130 17:14:41.276600 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" event={"ID":"7390a607-60b7-4f18-af7a-b4391c97a01f","Type":"ContainerDied","Data":"d969bcaf9b3a7a4c6a73b4b15488c6ea9483d56abea959f5b0256183b0a9c094"} Jan 30 17:14:41 crc kubenswrapper[4875]: I0130 17:14:41.276644 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n" Jan 30 17:14:41 crc kubenswrapper[4875]: I0130 17:14:41.276673 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d969bcaf9b3a7a4c6a73b4b15488c6ea9483d56abea959f5b0256183b0a9c094" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.554124 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69"] Jan 30 17:14:46 crc kubenswrapper[4875]: E0130 17:14:46.554854 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7390a607-60b7-4f18-af7a-b4391c97a01f" containerName="extract" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.554865 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="7390a607-60b7-4f18-af7a-b4391c97a01f" containerName="extract" Jan 30 17:14:46 crc kubenswrapper[4875]: E0130 17:14:46.554879 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7390a607-60b7-4f18-af7a-b4391c97a01f" containerName="pull" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.554885 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="7390a607-60b7-4f18-af7a-b4391c97a01f" containerName="pull" Jan 30 17:14:46 crc kubenswrapper[4875]: E0130 17:14:46.554898 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7390a607-60b7-4f18-af7a-b4391c97a01f" containerName="util" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.554904 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="7390a607-60b7-4f18-af7a-b4391c97a01f" containerName="util" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.555035 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="7390a607-60b7-4f18-af7a-b4391c97a01f" containerName="extract" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.555515 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.557373 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-service-cert" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.559819 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-rr67l" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.565019 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69"] Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.744219 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg9zw\" (UniqueName: \"kubernetes.io/projected/bdc3f51f-4dc1-45bd-b26d-1cacf01f9097-kube-api-access-gg9zw\") pod \"nova-operator-controller-manager-64bd9bf7b6-llx69\" (UID: \"bdc3f51f-4dc1-45bd-b26d-1cacf01f9097\") " pod="openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.744278 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bdc3f51f-4dc1-45bd-b26d-1cacf01f9097-webhook-cert\") pod \"nova-operator-controller-manager-64bd9bf7b6-llx69\" (UID: \"bdc3f51f-4dc1-45bd-b26d-1cacf01f9097\") " pod="openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.744484 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bdc3f51f-4dc1-45bd-b26d-1cacf01f9097-apiservice-cert\") pod \"nova-operator-controller-manager-64bd9bf7b6-llx69\" (UID: \"bdc3f51f-4dc1-45bd-b26d-1cacf01f9097\") " pod="openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.846188 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg9zw\" (UniqueName: \"kubernetes.io/projected/bdc3f51f-4dc1-45bd-b26d-1cacf01f9097-kube-api-access-gg9zw\") pod \"nova-operator-controller-manager-64bd9bf7b6-llx69\" (UID: \"bdc3f51f-4dc1-45bd-b26d-1cacf01f9097\") " pod="openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.846250 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bdc3f51f-4dc1-45bd-b26d-1cacf01f9097-webhook-cert\") pod \"nova-operator-controller-manager-64bd9bf7b6-llx69\" (UID: \"bdc3f51f-4dc1-45bd-b26d-1cacf01f9097\") " pod="openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.846314 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bdc3f51f-4dc1-45bd-b26d-1cacf01f9097-apiservice-cert\") pod \"nova-operator-controller-manager-64bd9bf7b6-llx69\" (UID: \"bdc3f51f-4dc1-45bd-b26d-1cacf01f9097\") " pod="openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.851807 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bdc3f51f-4dc1-45bd-b26d-1cacf01f9097-apiservice-cert\") pod \"nova-operator-controller-manager-64bd9bf7b6-llx69\" (UID: \"bdc3f51f-4dc1-45bd-b26d-1cacf01f9097\") " pod="openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.851865 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bdc3f51f-4dc1-45bd-b26d-1cacf01f9097-webhook-cert\") pod \"nova-operator-controller-manager-64bd9bf7b6-llx69\" (UID: \"bdc3f51f-4dc1-45bd-b26d-1cacf01f9097\") " pod="openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.865469 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg9zw\" (UniqueName: \"kubernetes.io/projected/bdc3f51f-4dc1-45bd-b26d-1cacf01f9097-kube-api-access-gg9zw\") pod \"nova-operator-controller-manager-64bd9bf7b6-llx69\" (UID: \"bdc3f51f-4dc1-45bd-b26d-1cacf01f9097\") " pod="openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69" Jan 30 17:14:46 crc kubenswrapper[4875]: I0130 17:14:46.928533 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69" Jan 30 17:14:47 crc kubenswrapper[4875]: I0130 17:14:47.365959 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69"] Jan 30 17:14:48 crc kubenswrapper[4875]: I0130 17:14:48.331771 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69" event={"ID":"bdc3f51f-4dc1-45bd-b26d-1cacf01f9097","Type":"ContainerStarted","Data":"358100831dd61ac3434a99d9708fb5860ca17435b20a7d841a23768d394d0dd2"} Jan 30 17:14:48 crc kubenswrapper[4875]: I0130 17:14:48.332053 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69" Jan 30 17:14:48 crc kubenswrapper[4875]: I0130 17:14:48.332064 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69" event={"ID":"bdc3f51f-4dc1-45bd-b26d-1cacf01f9097","Type":"ContainerStarted","Data":"424830b3062e95b4ec3652798d9f4e298b48890a7cbd208cb1c2deb4e5fbe1c2"} Jan 30 17:14:48 crc kubenswrapper[4875]: I0130 17:14:48.363728 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69" podStartSLOduration=2.36370821 podStartE2EDuration="2.36370821s" podCreationTimestamp="2026-01-30 17:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:14:48.361194673 +0000 UTC m=+1098.908558066" watchObservedRunningTime="2026-01-30 17:14:48.36370821 +0000 UTC m=+1098.911071593" Jan 30 17:14:56 crc kubenswrapper[4875]: I0130 17:14:56.287060 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:14:56 crc kubenswrapper[4875]: I0130 17:14:56.288696 4875 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:14:56 crc kubenswrapper[4875]: I0130 17:14:56.933212 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-64bd9bf7b6-llx69" Jan 30 17:15:00 crc kubenswrapper[4875]: I0130 17:15:00.144694 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm"] Jan 30 17:15:00 crc kubenswrapper[4875]: I0130 17:15:00.146094 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm" Jan 30 17:15:00 crc kubenswrapper[4875]: I0130 17:15:00.148108 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 17:15:00 crc kubenswrapper[4875]: I0130 17:15:00.154536 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm"] Jan 30 17:15:00 crc kubenswrapper[4875]: I0130 17:15:00.156057 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 17:15:00 crc kubenswrapper[4875]: I0130 17:15:00.235049 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-secret-volume\") pod \"collect-profiles-29496555-gjwkm\" (UID: \"5bb26a8a-d769-44c6-8e55-e4c0902c89c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm" Jan 30 17:15:00 crc kubenswrapper[4875]: I0130 17:15:00.235118 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsqxl\" (UniqueName: \"kubernetes.io/projected/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-kube-api-access-rsqxl\") pod \"collect-profiles-29496555-gjwkm\" (UID: \"5bb26a8a-d769-44c6-8e55-e4c0902c89c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm" Jan 30 17:15:00 crc kubenswrapper[4875]: I0130 17:15:00.235150 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-config-volume\") pod \"collect-profiles-29496555-gjwkm\" (UID: \"5bb26a8a-d769-44c6-8e55-e4c0902c89c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm" Jan 30 17:15:00 crc kubenswrapper[4875]: I0130 17:15:00.337160 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-secret-volume\") pod \"collect-profiles-29496555-gjwkm\" (UID: \"5bb26a8a-d769-44c6-8e55-e4c0902c89c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm" Jan 30 17:15:00 crc kubenswrapper[4875]: I0130 17:15:00.337232 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsqxl\" (UniqueName: \"kubernetes.io/projected/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-kube-api-access-rsqxl\") pod \"collect-profiles-29496555-gjwkm\" (UID: 
\"5bb26a8a-d769-44c6-8e55-e4c0902c89c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm" Jan 30 17:15:00 crc kubenswrapper[4875]: I0130 17:15:00.337258 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-config-volume\") pod \"collect-profiles-29496555-gjwkm\" (UID: \"5bb26a8a-d769-44c6-8e55-e4c0902c89c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm" Jan 30 17:15:00 crc kubenswrapper[4875]: I0130 17:15:00.338307 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-config-volume\") pod \"collect-profiles-29496555-gjwkm\" (UID: \"5bb26a8a-d769-44c6-8e55-e4c0902c89c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm" Jan 30 17:15:00 crc kubenswrapper[4875]: I0130 17:15:00.342357 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-secret-volume\") pod \"collect-profiles-29496555-gjwkm\" (UID: \"5bb26a8a-d769-44c6-8e55-e4c0902c89c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm" Jan 30 17:15:00 crc kubenswrapper[4875]: I0130 17:15:00.352960 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsqxl\" (UniqueName: \"kubernetes.io/projected/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-kube-api-access-rsqxl\") pod \"collect-profiles-29496555-gjwkm\" (UID: \"5bb26a8a-d769-44c6-8e55-e4c0902c89c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm" Jan 30 17:15:00 crc kubenswrapper[4875]: I0130 17:15:00.466721 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm" Jan 30 17:15:00 crc kubenswrapper[4875]: I0130 17:15:00.883903 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm"] Jan 30 17:15:01 crc kubenswrapper[4875]: I0130 17:15:01.432755 4875 generic.go:334] "Generic (PLEG): container finished" podID="5bb26a8a-d769-44c6-8e55-e4c0902c89c0" containerID="465ea2abfe69d6664d9adbfa51bbbf8dfdefa67acab02cf6dc17fe163048ee34" exitCode=0 Jan 30 17:15:01 crc kubenswrapper[4875]: I0130 17:15:01.432905 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm" event={"ID":"5bb26a8a-d769-44c6-8e55-e4c0902c89c0","Type":"ContainerDied","Data":"465ea2abfe69d6664d9adbfa51bbbf8dfdefa67acab02cf6dc17fe163048ee34"} Jan 30 17:15:01 crc kubenswrapper[4875]: I0130 17:15:01.433071 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm" event={"ID":"5bb26a8a-d769-44c6-8e55-e4c0902c89c0","Type":"ContainerStarted","Data":"11ecf235d2943125b499dcce41294b0bcd8affb6deb3a7b11c3c1a11af8973f4"} Jan 30 17:15:02 crc kubenswrapper[4875]: I0130 17:15:02.936041 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm" Jan 30 17:15:03 crc kubenswrapper[4875]: I0130 17:15:03.083558 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-config-volume\") pod \"5bb26a8a-d769-44c6-8e55-e4c0902c89c0\" (UID: \"5bb26a8a-d769-44c6-8e55-e4c0902c89c0\") " Jan 30 17:15:03 crc kubenswrapper[4875]: I0130 17:15:03.083650 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsqxl\" (UniqueName: \"kubernetes.io/projected/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-kube-api-access-rsqxl\") pod \"5bb26a8a-d769-44c6-8e55-e4c0902c89c0\" (UID: \"5bb26a8a-d769-44c6-8e55-e4c0902c89c0\") " Jan 30 17:15:03 crc kubenswrapper[4875]: I0130 17:15:03.083682 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-secret-volume\") pod \"5bb26a8a-d769-44c6-8e55-e4c0902c89c0\" (UID: \"5bb26a8a-d769-44c6-8e55-e4c0902c89c0\") " Jan 30 17:15:03 crc kubenswrapper[4875]: I0130 17:15:03.084354 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-config-volume" (OuterVolumeSpecName: "config-volume") pod "5bb26a8a-d769-44c6-8e55-e4c0902c89c0" (UID: "5bb26a8a-d769-44c6-8e55-e4c0902c89c0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:03 crc kubenswrapper[4875]: I0130 17:15:03.090398 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5bb26a8a-d769-44c6-8e55-e4c0902c89c0" (UID: "5bb26a8a-d769-44c6-8e55-e4c0902c89c0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:03 crc kubenswrapper[4875]: I0130 17:15:03.091273 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-kube-api-access-rsqxl" (OuterVolumeSpecName: "kube-api-access-rsqxl") pod "5bb26a8a-d769-44c6-8e55-e4c0902c89c0" (UID: "5bb26a8a-d769-44c6-8e55-e4c0902c89c0"). InnerVolumeSpecName "kube-api-access-rsqxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:03 crc kubenswrapper[4875]: I0130 17:15:03.185753 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsqxl\" (UniqueName: \"kubernetes.io/projected/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-kube-api-access-rsqxl\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:03 crc kubenswrapper[4875]: I0130 17:15:03.185795 4875 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:03 crc kubenswrapper[4875]: I0130 17:15:03.185804 4875 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bb26a8a-d769-44c6-8e55-e4c0902c89c0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:03 crc kubenswrapper[4875]: I0130 17:15:03.449207 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm" event={"ID":"5bb26a8a-d769-44c6-8e55-e4c0902c89c0","Type":"ContainerDied","Data":"11ecf235d2943125b499dcce41294b0bcd8affb6deb3a7b11c3c1a11af8973f4"} Jan 30 17:15:03 crc kubenswrapper[4875]: I0130 17:15:03.449253 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11ecf235d2943125b499dcce41294b0bcd8affb6deb3a7b11c3c1a11af8973f4" Jan 30 17:15:03 crc kubenswrapper[4875]: I0130 17:15:03.449329 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-gjwkm" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.707833 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-z4tpp"] Jan 30 17:15:22 crc kubenswrapper[4875]: E0130 17:15:22.708676 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bb26a8a-d769-44c6-8e55-e4c0902c89c0" containerName="collect-profiles" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.708688 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb26a8a-d769-44c6-8e55-e4c0902c89c0" containerName="collect-profiles" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.708847 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bb26a8a-d769-44c6-8e55-e4c0902c89c0" containerName="collect-profiles" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.709373 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-z4tpp" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.719995 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-z4tpp"] Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.728884 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k"] Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.730135 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.741803 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.753689 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k"] Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.789725 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb305e99-aa29-41e8-97de-f49f2fdd8e7b-operator-scripts\") pod \"nova-api-db-create-z4tpp\" (UID: \"bb305e99-aa29-41e8-97de-f49f2fdd8e7b\") " pod="nova-kuttl-default/nova-api-db-create-z4tpp" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.789802 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f3d7a5e-cb17-44f8-9898-c41e0cff56bf-operator-scripts\") pod \"nova-api-dd3c-account-create-update-fpg7k\" (UID: \"5f3d7a5e-cb17-44f8-9898-c41e0cff56bf\") " pod="nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.789875 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hptf\" (UniqueName: \"kubernetes.io/projected/5f3d7a5e-cb17-44f8-9898-c41e0cff56bf-kube-api-access-4hptf\") pod \"nova-api-dd3c-account-create-update-fpg7k\" (UID: \"5f3d7a5e-cb17-44f8-9898-c41e0cff56bf\") " pod="nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.789961 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjcfp\" (UniqueName: \"kubernetes.io/projected/bb305e99-aa29-41e8-97de-f49f2fdd8e7b-kube-api-access-qjcfp\") pod \"nova-api-db-create-z4tpp\" (UID: \"bb305e99-aa29-41e8-97de-f49f2fdd8e7b\") " pod="nova-kuttl-default/nova-api-db-create-z4tpp" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.823278 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-jkxb9"] Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.824398 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-jkxb9" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.832529 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-jkxb9"] Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.892888 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f3d7a5e-cb17-44f8-9898-c41e0cff56bf-operator-scripts\") pod \"nova-api-dd3c-account-create-update-fpg7k\" (UID: \"5f3d7a5e-cb17-44f8-9898-c41e0cff56bf\") " pod="nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.892990 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hptf\" (UniqueName: \"kubernetes.io/projected/5f3d7a5e-cb17-44f8-9898-c41e0cff56bf-kube-api-access-4hptf\") pod \"nova-api-dd3c-account-create-update-fpg7k\" (UID: \"5f3d7a5e-cb17-44f8-9898-c41e0cff56bf\") " pod="nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.893058 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e1d2c4-624d-42d8-93fc-d203ec6a9c0f-operator-scripts\") pod \"nova-cell0-db-create-jkxb9\" (UID: \"84e1d2c4-624d-42d8-93fc-d203ec6a9c0f\") " pod="nova-kuttl-default/nova-cell0-db-create-jkxb9" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.893130 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjcfp\" (UniqueName: \"kubernetes.io/projected/bb305e99-aa29-41e8-97de-f49f2fdd8e7b-kube-api-access-qjcfp\") pod \"nova-api-db-create-z4tpp\" (UID: \"bb305e99-aa29-41e8-97de-f49f2fdd8e7b\") " pod="nova-kuttl-default/nova-api-db-create-z4tpp" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.893170 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb305e99-aa29-41e8-97de-f49f2fdd8e7b-operator-scripts\") pod \"nova-api-db-create-z4tpp\" (UID: \"bb305e99-aa29-41e8-97de-f49f2fdd8e7b\") " pod="nova-kuttl-default/nova-api-db-create-z4tpp" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.893193 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2wjs\" (UniqueName: \"kubernetes.io/projected/84e1d2c4-624d-42d8-93fc-d203ec6a9c0f-kube-api-access-l2wjs\") pod \"nova-cell0-db-create-jkxb9\" (UID: \"84e1d2c4-624d-42d8-93fc-d203ec6a9c0f\") " pod="nova-kuttl-default/nova-cell0-db-create-jkxb9" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.894445 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb305e99-aa29-41e8-97de-f49f2fdd8e7b-operator-scripts\") pod \"nova-api-db-create-z4tpp\" (UID: \"bb305e99-aa29-41e8-97de-f49f2fdd8e7b\") " pod="nova-kuttl-default/nova-api-db-create-z4tpp" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.894469 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f3d7a5e-cb17-44f8-9898-c41e0cff56bf-operator-scripts\") pod \"nova-api-dd3c-account-create-update-fpg7k\" (UID: \"5f3d7a5e-cb17-44f8-9898-c41e0cff56bf\") " 
pod="nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.918431 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hptf\" (UniqueName: \"kubernetes.io/projected/5f3d7a5e-cb17-44f8-9898-c41e0cff56bf-kube-api-access-4hptf\") pod \"nova-api-dd3c-account-create-update-fpg7k\" (UID: \"5f3d7a5e-cb17-44f8-9898-c41e0cff56bf\") " pod="nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.928206 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjcfp\" (UniqueName: \"kubernetes.io/projected/bb305e99-aa29-41e8-97de-f49f2fdd8e7b-kube-api-access-qjcfp\") pod \"nova-api-db-create-z4tpp\" (UID: \"bb305e99-aa29-41e8-97de-f49f2fdd8e7b\") " pod="nova-kuttl-default/nova-api-db-create-z4tpp" Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.934401 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-kgb4q"] Jan 30 17:15:22 crc kubenswrapper[4875]: I0130 17:15:22.935602 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-kgb4q" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:22.998245 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr"] Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:22.999755 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.002651 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.007941 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-kgb4q"] Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.016679 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgbx6\" (UniqueName: \"kubernetes.io/projected/529f3b7f-281a-4cd3-a0be-885fc730c789-kube-api-access-rgbx6\") pod \"nova-cell1-db-create-kgb4q\" (UID: \"529f3b7f-281a-4cd3-a0be-885fc730c789\") " pod="nova-kuttl-default/nova-cell1-db-create-kgb4q" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.016757 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/529f3b7f-281a-4cd3-a0be-885fc730c789-operator-scripts\") pod \"nova-cell1-db-create-kgb4q\" (UID: \"529f3b7f-281a-4cd3-a0be-885fc730c789\") " pod="nova-kuttl-default/nova-cell1-db-create-kgb4q" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.016805 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e1d2c4-624d-42d8-93fc-d203ec6a9c0f-operator-scripts\") pod \"nova-cell0-db-create-jkxb9\" (UID: \"84e1d2c4-624d-42d8-93fc-d203ec6a9c0f\") " pod="nova-kuttl-default/nova-cell0-db-create-jkxb9" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.016909 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2wjs\" (UniqueName: \"kubernetes.io/projected/84e1d2c4-624d-42d8-93fc-d203ec6a9c0f-kube-api-access-l2wjs\") pod \"nova-cell0-db-create-jkxb9\" (UID: 
\"84e1d2c4-624d-42d8-93fc-d203ec6a9c0f\") " pod="nova-kuttl-default/nova-cell0-db-create-jkxb9" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.017948 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e1d2c4-624d-42d8-93fc-d203ec6a9c0f-operator-scripts\") pod \"nova-cell0-db-create-jkxb9\" (UID: \"84e1d2c4-624d-42d8-93fc-d203ec6a9c0f\") " pod="nova-kuttl-default/nova-cell0-db-create-jkxb9" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.033676 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-z4tpp" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.044635 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr"] Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.051148 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.074265 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2wjs\" (UniqueName: \"kubernetes.io/projected/84e1d2c4-624d-42d8-93fc-d203ec6a9c0f-kube-api-access-l2wjs\") pod \"nova-cell0-db-create-jkxb9\" (UID: \"84e1d2c4-624d-42d8-93fc-d203ec6a9c0f\") " pod="nova-kuttl-default/nova-cell0-db-create-jkxb9" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.117883 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/529f3b7f-281a-4cd3-a0be-885fc730c789-operator-scripts\") pod \"nova-cell1-db-create-kgb4q\" (UID: \"529f3b7f-281a-4cd3-a0be-885fc730c789\") " pod="nova-kuttl-default/nova-cell1-db-create-kgb4q" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.117983 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346898dc-db0f-4f45-aa32-d4234d759042-operator-scripts\") pod \"nova-cell0-bf0b-account-create-update-p9bpr\" (UID: \"346898dc-db0f-4f45-aa32-d4234d759042\") " pod="nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.118026 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqvsd\" (UniqueName: \"kubernetes.io/projected/346898dc-db0f-4f45-aa32-d4234d759042-kube-api-access-xqvsd\") pod \"nova-cell0-bf0b-account-create-update-p9bpr\" (UID: \"346898dc-db0f-4f45-aa32-d4234d759042\") " pod="nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.118058 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgbx6\" (UniqueName: \"kubernetes.io/projected/529f3b7f-281a-4cd3-a0be-885fc730c789-kube-api-access-rgbx6\") pod \"nova-cell1-db-create-kgb4q\" (UID: \"529f3b7f-281a-4cd3-a0be-885fc730c789\") " pod="nova-kuttl-default/nova-cell1-db-create-kgb4q" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.119995 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/529f3b7f-281a-4cd3-a0be-885fc730c789-operator-scripts\") pod \"nova-cell1-db-create-kgb4q\" (UID: \"529f3b7f-281a-4cd3-a0be-885fc730c789\") " 
pod="nova-kuttl-default/nova-cell1-db-create-kgb4q" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.140529 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgbx6\" (UniqueName: \"kubernetes.io/projected/529f3b7f-281a-4cd3-a0be-885fc730c789-kube-api-access-rgbx6\") pod \"nova-cell1-db-create-kgb4q\" (UID: \"529f3b7f-281a-4cd3-a0be-885fc730c789\") " pod="nova-kuttl-default/nova-cell1-db-create-kgb4q" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.164222 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-jkxb9" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.222735 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346898dc-db0f-4f45-aa32-d4234d759042-operator-scripts\") pod \"nova-cell0-bf0b-account-create-update-p9bpr\" (UID: \"346898dc-db0f-4f45-aa32-d4234d759042\") " pod="nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.222814 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqvsd\" (UniqueName: \"kubernetes.io/projected/346898dc-db0f-4f45-aa32-d4234d759042-kube-api-access-xqvsd\") pod \"nova-cell0-bf0b-account-create-update-p9bpr\" (UID: \"346898dc-db0f-4f45-aa32-d4234d759042\") " pod="nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.225996 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt"] Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.226968 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.227723 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346898dc-db0f-4f45-aa32-d4234d759042-operator-scripts\") pod \"nova-cell0-bf0b-account-create-update-p9bpr\" (UID: \"346898dc-db0f-4f45-aa32-d4234d759042\") " pod="nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.231035 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.234019 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt"] Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.244015 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqvsd\" (UniqueName: \"kubernetes.io/projected/346898dc-db0f-4f45-aa32-d4234d759042-kube-api-access-xqvsd\") pod \"nova-cell0-bf0b-account-create-update-p9bpr\" (UID: \"346898dc-db0f-4f45-aa32-d4234d759042\") " pod="nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.324715 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4hfw\" (UniqueName: \"kubernetes.io/projected/2984f7e2-f590-4d66-ab1b-76ee8d3a7869-kube-api-access-c4hfw\") pod \"nova-cell1-cf36-account-create-update-sfmpt\" (UID: \"2984f7e2-f590-4d66-ab1b-76ee8d3a7869\") " pod="nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.324768 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2984f7e2-f590-4d66-ab1b-76ee8d3a7869-operator-scripts\") pod \"nova-cell1-cf36-account-create-update-sfmpt\" (UID: \"2984f7e2-f590-4d66-ab1b-76ee8d3a7869\") " pod="nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.327012 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-kgb4q" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.427011 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4hfw\" (UniqueName: \"kubernetes.io/projected/2984f7e2-f590-4d66-ab1b-76ee8d3a7869-kube-api-access-c4hfw\") pod \"nova-cell1-cf36-account-create-update-sfmpt\" (UID: \"2984f7e2-f590-4d66-ab1b-76ee8d3a7869\") " pod="nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.427074 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2984f7e2-f590-4d66-ab1b-76ee8d3a7869-operator-scripts\") pod \"nova-cell1-cf36-account-create-update-sfmpt\" (UID: \"2984f7e2-f590-4d66-ab1b-76ee8d3a7869\") " pod="nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.428207 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2984f7e2-f590-4d66-ab1b-76ee8d3a7869-operator-scripts\") pod \"nova-cell1-cf36-account-create-update-sfmpt\" (UID: \"2984f7e2-f590-4d66-ab1b-76ee8d3a7869\") " pod="nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.442038 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.445784 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4hfw\" (UniqueName: \"kubernetes.io/projected/2984f7e2-f590-4d66-ab1b-76ee8d3a7869-kube-api-access-c4hfw\") pod \"nova-cell1-cf36-account-create-update-sfmpt\" (UID: \"2984f7e2-f590-4d66-ab1b-76ee8d3a7869\") " pod="nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.505898 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k"] Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.551893 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt" Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.576634 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-z4tpp"] Jan 30 17:15:23 crc kubenswrapper[4875]: W0130 17:15:23.581686 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb305e99_aa29_41e8_97de_f49f2fdd8e7b.slice/crio-ed032b25445d4dff25b920e9122e90c7b5ba7a5c489ddf5279b03680a22df454 WatchSource:0}: Error finding container ed032b25445d4dff25b920e9122e90c7b5ba7a5c489ddf5279b03680a22df454: Status 404 returned error can't find the container with id ed032b25445d4dff25b920e9122e90c7b5ba7a5c489ddf5279b03680a22df454 Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.610673 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-z4tpp" event={"ID":"bb305e99-aa29-41e8-97de-f49f2fdd8e7b","Type":"ContainerStarted","Data":"ed032b25445d4dff25b920e9122e90c7b5ba7a5c489ddf5279b03680a22df454"} Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.613097 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k" event={"ID":"5f3d7a5e-cb17-44f8-9898-c41e0cff56bf","Type":"ContainerStarted","Data":"e49e025cdadb3e8dcba5b0f509ed9049de41a359ed9a90841e50b1356280b0e4"} Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.669337 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-jkxb9"] Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.750859 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-kgb4q"] Jan 30 17:15:23 crc kubenswrapper[4875]: W0130 17:15:23.752633 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod529f3b7f_281a_4cd3_a0be_885fc730c789.slice/crio-6c19ddef47496cadd8efbfeee0f63ce95b35fc2077bac6cc6c67fabbafc1bfa1 WatchSource:0}: Error finding container 6c19ddef47496cadd8efbfeee0f63ce95b35fc2077bac6cc6c67fabbafc1bfa1: Status 404 returned error can't find the container with id 6c19ddef47496cadd8efbfeee0f63ce95b35fc2077bac6cc6c67fabbafc1bfa1 Jan 30 17:15:23 crc kubenswrapper[4875]: W0130 17:15:23.904920 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod346898dc_db0f_4f45_aa32_d4234d759042.slice/crio-5a067e59736f6fd06b57cc93fcd8646800606c2ae60c06004014ec8c42d8b995 WatchSource:0}: Error finding container 5a067e59736f6fd06b57cc93fcd8646800606c2ae60c06004014ec8c42d8b995: Status 404 returned error can't find the container with id 5a067e59736f6fd06b57cc93fcd8646800606c2ae60c06004014ec8c42d8b995 Jan 30 17:15:23 crc kubenswrapper[4875]: I0130 17:15:23.910944 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr"] Jan 30 17:15:24 crc kubenswrapper[4875]: I0130 17:15:24.072652 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt"] Jan 30 17:15:24 crc kubenswrapper[4875]: W0130 17:15:24.134487 4875 manager.go:1169] Failed to process watch event {EventType:0 
Jan 30 17:15:24 crc kubenswrapper[4875]: I0130 17:15:24.621999 4875 generic.go:334] "Generic (PLEG): container finished" podID="5f3d7a5e-cb17-44f8-9898-c41e0cff56bf" containerID="18e6d6fcc136cd1be771cc4f120dd5e70dcc57b0a3cdab20e3d1dc635d89f80f" exitCode=0
Jan 30 17:15:24 crc kubenswrapper[4875]: I0130 17:15:24.622102 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k" event={"ID":"5f3d7a5e-cb17-44f8-9898-c41e0cff56bf","Type":"ContainerDied","Data":"18e6d6fcc136cd1be771cc4f120dd5e70dcc57b0a3cdab20e3d1dc635d89f80f"}
Jan 30 17:15:24 crc kubenswrapper[4875]: I0130 17:15:24.624472 4875 generic.go:334] "Generic (PLEG): container finished" podID="2984f7e2-f590-4d66-ab1b-76ee8d3a7869" containerID="2ab78dd05c9c2b5ed5d5660300596887791c39e1464e42050bc08d8db0d931ad" exitCode=0
Jan 30 17:15:24 crc kubenswrapper[4875]: I0130 17:15:24.624534 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt" event={"ID":"2984f7e2-f590-4d66-ab1b-76ee8d3a7869","Type":"ContainerDied","Data":"2ab78dd05c9c2b5ed5d5660300596887791c39e1464e42050bc08d8db0d931ad"}
Jan 30 17:15:24 crc kubenswrapper[4875]: I0130 17:15:24.624553 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt" event={"ID":"2984f7e2-f590-4d66-ab1b-76ee8d3a7869","Type":"ContainerStarted","Data":"1006df5b66dddedee8275ac5b66a724a77f07283570fdcaeeebd85e40e3c8179"}
Jan 30 17:15:24 crc kubenswrapper[4875]: I0130 17:15:24.626264 4875 generic.go:334] "Generic (PLEG): container finished" podID="bb305e99-aa29-41e8-97de-f49f2fdd8e7b" containerID="94eff01f095b89372f2d9f2896f0d252cbc05ca211c197972fd090c2b6bae45c" exitCode=0
Jan 30 17:15:24 crc kubenswrapper[4875]: I0130 17:15:24.626304 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-z4tpp" event={"ID":"bb305e99-aa29-41e8-97de-f49f2fdd8e7b","Type":"ContainerDied","Data":"94eff01f095b89372f2d9f2896f0d252cbc05ca211c197972fd090c2b6bae45c"}
Jan 30 17:15:24 crc kubenswrapper[4875]: I0130 17:15:24.628052 4875 generic.go:334] "Generic (PLEG): container finished" podID="529f3b7f-281a-4cd3-a0be-885fc730c789" containerID="94167c01229aeec8cf619d60e4aacab5d84e8f547b093595aaccda02f1d69fd0" exitCode=0
Jan 30 17:15:24 crc kubenswrapper[4875]: I0130 17:15:24.628090 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-kgb4q" event={"ID":"529f3b7f-281a-4cd3-a0be-885fc730c789","Type":"ContainerDied","Data":"94167c01229aeec8cf619d60e4aacab5d84e8f547b093595aaccda02f1d69fd0"}
Jan 30 17:15:24 crc kubenswrapper[4875]: I0130 17:15:24.628136 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-kgb4q" event={"ID":"529f3b7f-281a-4cd3-a0be-885fc730c789","Type":"ContainerStarted","Data":"6c19ddef47496cadd8efbfeee0f63ce95b35fc2077bac6cc6c67fabbafc1bfa1"}
Jan 30 17:15:24 crc kubenswrapper[4875]: I0130 17:15:24.629709 4875 generic.go:334] "Generic (PLEG): container finished" podID="84e1d2c4-624d-42d8-93fc-d203ec6a9c0f" containerID="fa165cdef2cb82c68a99afbe4896b77d0c32fde2b5b72a6252d631b0c9c1cd70" exitCode=0
Jan 30 17:15:24 crc kubenswrapper[4875]: I0130 17:15:24.629804 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-jkxb9" event={"ID":"84e1d2c4-624d-42d8-93fc-d203ec6a9c0f","Type":"ContainerDied","Data":"fa165cdef2cb82c68a99afbe4896b77d0c32fde2b5b72a6252d631b0c9c1cd70"}
Jan 30 17:15:24 crc kubenswrapper[4875]: I0130 17:15:24.629841 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-jkxb9" event={"ID":"84e1d2c4-624d-42d8-93fc-d203ec6a9c0f","Type":"ContainerStarted","Data":"5e60acec692daf3951a9b6f63508f024b067a60c910c64615ec48151f9006f53"}
Jan 30 17:15:24 crc kubenswrapper[4875]: I0130 17:15:24.632479 4875 generic.go:334] "Generic (PLEG): container finished" podID="346898dc-db0f-4f45-aa32-d4234d759042" containerID="06e4849a25106592fdd88b8f37251f2cd1240f332fe06fb5eee071de7b904aea" exitCode=0
Jan 30 17:15:24 crc kubenswrapper[4875]: I0130 17:15:24.632512 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr" event={"ID":"346898dc-db0f-4f45-aa32-d4234d759042","Type":"ContainerDied","Data":"06e4849a25106592fdd88b8f37251f2cd1240f332fe06fb5eee071de7b904aea"}
Jan 30 17:15:24 crc kubenswrapper[4875]: I0130 17:15:24.632538 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr" event={"ID":"346898dc-db0f-4f45-aa32-d4234d759042","Type":"ContainerStarted","Data":"5a067e59736f6fd06b57cc93fcd8646800606c2ae60c06004014ec8c42d8b995"}
Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.048756 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k"
Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.168867 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hptf\" (UniqueName: \"kubernetes.io/projected/5f3d7a5e-cb17-44f8-9898-c41e0cff56bf-kube-api-access-4hptf\") pod \"5f3d7a5e-cb17-44f8-9898-c41e0cff56bf\" (UID: \"5f3d7a5e-cb17-44f8-9898-c41e0cff56bf\") "
Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.169075 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f3d7a5e-cb17-44f8-9898-c41e0cff56bf-operator-scripts\") pod \"5f3d7a5e-cb17-44f8-9898-c41e0cff56bf\" (UID: \"5f3d7a5e-cb17-44f8-9898-c41e0cff56bf\") "
Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.169621 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f3d7a5e-cb17-44f8-9898-c41e0cff56bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f3d7a5e-cb17-44f8-9898-c41e0cff56bf" (UID: "5f3d7a5e-cb17-44f8-9898-c41e0cff56bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.174321 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f3d7a5e-cb17-44f8-9898-c41e0cff56bf-kube-api-access-4hptf" (OuterVolumeSpecName: "kube-api-access-4hptf") pod "5f3d7a5e-cb17-44f8-9898-c41e0cff56bf" (UID: "5f3d7a5e-cb17-44f8-9898-c41e0cff56bf"). InnerVolumeSpecName "kube-api-access-4hptf". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.270921 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f3d7a5e-cb17-44f8-9898-c41e0cff56bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.270948 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hptf\" (UniqueName: \"kubernetes.io/projected/5f3d7a5e-cb17-44f8-9898-c41e0cff56bf-kube-api-access-4hptf\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.288079 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.288176 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.288248 4875 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.290360 4875 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6514542be49997aad4594ad0a6547ac470439752a0efaf44fa7c391eb010bcf6"} pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.290445 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" containerID="cri-o://6514542be49997aad4594ad0a6547ac470439752a0efaf44fa7c391eb010bcf6" gracePeriod=600 Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.661361 4875 generic.go:334] "Generic (PLEG): container finished" podID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerID="6514542be49997aad4594ad0a6547ac470439752a0efaf44fa7c391eb010bcf6" exitCode=0 Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.661664 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerDied","Data":"6514542be49997aad4594ad0a6547ac470439752a0efaf44fa7c391eb010bcf6"} Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.674160 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerStarted","Data":"48e3a087955728186281898d070efcfe8a3f5df09e6720b6da52c18157fc11ce"} Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.674248 4875 scope.go:117] "RemoveContainer" containerID="ed42a4c14dffd4d7e8ff0992005f668baba6e088536dd037290ec2423738d85a" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 
17:15:26.681097 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k" event={"ID":"5f3d7a5e-cb17-44f8-9898-c41e0cff56bf","Type":"ContainerDied","Data":"e49e025cdadb3e8dcba5b0f509ed9049de41a359ed9a90841e50b1356280b0e4"} Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.681134 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e49e025cdadb3e8dcba5b0f509ed9049de41a359ed9a90841e50b1356280b0e4" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.681183 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.720176 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-z4tpp" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.747623 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-kgb4q" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.778270 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgbx6\" (UniqueName: \"kubernetes.io/projected/529f3b7f-281a-4cd3-a0be-885fc730c789-kube-api-access-rgbx6\") pod \"529f3b7f-281a-4cd3-a0be-885fc730c789\" (UID: \"529f3b7f-281a-4cd3-a0be-885fc730c789\") " Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.778522 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/529f3b7f-281a-4cd3-a0be-885fc730c789-operator-scripts\") pod \"529f3b7f-281a-4cd3-a0be-885fc730c789\" (UID: \"529f3b7f-281a-4cd3-a0be-885fc730c789\") " Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.778645 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjcfp\" (UniqueName: \"kubernetes.io/projected/bb305e99-aa29-41e8-97de-f49f2fdd8e7b-kube-api-access-qjcfp\") pod \"bb305e99-aa29-41e8-97de-f49f2fdd8e7b\" (UID: \"bb305e99-aa29-41e8-97de-f49f2fdd8e7b\") " Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.778706 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb305e99-aa29-41e8-97de-f49f2fdd8e7b-operator-scripts\") pod \"bb305e99-aa29-41e8-97de-f49f2fdd8e7b\" (UID: \"bb305e99-aa29-41e8-97de-f49f2fdd8e7b\") " Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.779213 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529f3b7f-281a-4cd3-a0be-885fc730c789-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "529f3b7f-281a-4cd3-a0be-885fc730c789" (UID: "529f3b7f-281a-4cd3-a0be-885fc730c789"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.780909 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb305e99-aa29-41e8-97de-f49f2fdd8e7b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb305e99-aa29-41e8-97de-f49f2fdd8e7b" (UID: "bb305e99-aa29-41e8-97de-f49f2fdd8e7b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.785148 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-jkxb9" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.785277 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb305e99-aa29-41e8-97de-f49f2fdd8e7b-kube-api-access-qjcfp" (OuterVolumeSpecName: "kube-api-access-qjcfp") pod "bb305e99-aa29-41e8-97de-f49f2fdd8e7b" (UID: "bb305e99-aa29-41e8-97de-f49f2fdd8e7b"). InnerVolumeSpecName "kube-api-access-qjcfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.785285 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529f3b7f-281a-4cd3-a0be-885fc730c789-kube-api-access-rgbx6" (OuterVolumeSpecName: "kube-api-access-rgbx6") pod "529f3b7f-281a-4cd3-a0be-885fc730c789" (UID: "529f3b7f-281a-4cd3-a0be-885fc730c789"). InnerVolumeSpecName "kube-api-access-rgbx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.806368 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.808655 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.879945 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4hfw\" (UniqueName: \"kubernetes.io/projected/2984f7e2-f590-4d66-ab1b-76ee8d3a7869-kube-api-access-c4hfw\") pod \"2984f7e2-f590-4d66-ab1b-76ee8d3a7869\" (UID: \"2984f7e2-f590-4d66-ab1b-76ee8d3a7869\") " Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.880022 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2wjs\" (UniqueName: \"kubernetes.io/projected/84e1d2c4-624d-42d8-93fc-d203ec6a9c0f-kube-api-access-l2wjs\") pod \"84e1d2c4-624d-42d8-93fc-d203ec6a9c0f\" (UID: \"84e1d2c4-624d-42d8-93fc-d203ec6a9c0f\") " Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.880056 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346898dc-db0f-4f45-aa32-d4234d759042-operator-scripts\") pod \"346898dc-db0f-4f45-aa32-d4234d759042\" (UID: \"346898dc-db0f-4f45-aa32-d4234d759042\") " Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.880089 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e1d2c4-624d-42d8-93fc-d203ec6a9c0f-operator-scripts\") pod \"84e1d2c4-624d-42d8-93fc-d203ec6a9c0f\" (UID: \"84e1d2c4-624d-42d8-93fc-d203ec6a9c0f\") " Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.880164 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2984f7e2-f590-4d66-ab1b-76ee8d3a7869-operator-scripts\") pod \"2984f7e2-f590-4d66-ab1b-76ee8d3a7869\" (UID: \"2984f7e2-f590-4d66-ab1b-76ee8d3a7869\") " Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.880283 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-xqvsd\" (UniqueName: \"kubernetes.io/projected/346898dc-db0f-4f45-aa32-d4234d759042-kube-api-access-xqvsd\") pod \"346898dc-db0f-4f45-aa32-d4234d759042\" (UID: \"346898dc-db0f-4f45-aa32-d4234d759042\") " Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.880633 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb305e99-aa29-41e8-97de-f49f2fdd8e7b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.880656 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgbx6\" (UniqueName: \"kubernetes.io/projected/529f3b7f-281a-4cd3-a0be-885fc730c789-kube-api-access-rgbx6\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.880670 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/529f3b7f-281a-4cd3-a0be-885fc730c789-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.880682 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjcfp\" (UniqueName: \"kubernetes.io/projected/bb305e99-aa29-41e8-97de-f49f2fdd8e7b-kube-api-access-qjcfp\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.880743 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2984f7e2-f590-4d66-ab1b-76ee8d3a7869-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2984f7e2-f590-4d66-ab1b-76ee8d3a7869" (UID: "2984f7e2-f590-4d66-ab1b-76ee8d3a7869"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.880854 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/346898dc-db0f-4f45-aa32-d4234d759042-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "346898dc-db0f-4f45-aa32-d4234d759042" (UID: "346898dc-db0f-4f45-aa32-d4234d759042"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.880862 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e1d2c4-624d-42d8-93fc-d203ec6a9c0f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84e1d2c4-624d-42d8-93fc-d203ec6a9c0f" (UID: "84e1d2c4-624d-42d8-93fc-d203ec6a9c0f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.882293 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e1d2c4-624d-42d8-93fc-d203ec6a9c0f-kube-api-access-l2wjs" (OuterVolumeSpecName: "kube-api-access-l2wjs") pod "84e1d2c4-624d-42d8-93fc-d203ec6a9c0f" (UID: "84e1d2c4-624d-42d8-93fc-d203ec6a9c0f"). InnerVolumeSpecName "kube-api-access-l2wjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.883141 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/346898dc-db0f-4f45-aa32-d4234d759042-kube-api-access-xqvsd" (OuterVolumeSpecName: "kube-api-access-xqvsd") pod "346898dc-db0f-4f45-aa32-d4234d759042" (UID: "346898dc-db0f-4f45-aa32-d4234d759042"). InnerVolumeSpecName "kube-api-access-xqvsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.883706 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2984f7e2-f590-4d66-ab1b-76ee8d3a7869-kube-api-access-c4hfw" (OuterVolumeSpecName: "kube-api-access-c4hfw") pod "2984f7e2-f590-4d66-ab1b-76ee8d3a7869" (UID: "2984f7e2-f590-4d66-ab1b-76ee8d3a7869"). InnerVolumeSpecName "kube-api-access-c4hfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.982494 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqvsd\" (UniqueName: \"kubernetes.io/projected/346898dc-db0f-4f45-aa32-d4234d759042-kube-api-access-xqvsd\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.982535 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4hfw\" (UniqueName: \"kubernetes.io/projected/2984f7e2-f590-4d66-ab1b-76ee8d3a7869-kube-api-access-c4hfw\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.982549 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2wjs\" (UniqueName: \"kubernetes.io/projected/84e1d2c4-624d-42d8-93fc-d203ec6a9c0f-kube-api-access-l2wjs\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.982561 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346898dc-db0f-4f45-aa32-d4234d759042-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.982573 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e1d2c4-624d-42d8-93fc-d203ec6a9c0f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:26 crc kubenswrapper[4875]: I0130 17:15:26.982606 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2984f7e2-f590-4d66-ab1b-76ee8d3a7869-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:27 crc kubenswrapper[4875]: I0130 17:15:27.690711 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr" event={"ID":"346898dc-db0f-4f45-aa32-d4234d759042","Type":"ContainerDied","Data":"5a067e59736f6fd06b57cc93fcd8646800606c2ae60c06004014ec8c42d8b995"} Jan 30 17:15:27 crc kubenswrapper[4875]: I0130 17:15:27.690977 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a067e59736f6fd06b57cc93fcd8646800606c2ae60c06004014ec8c42d8b995" Jan 30 17:15:27 crc kubenswrapper[4875]: I0130 17:15:27.690749 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr" Jan 30 17:15:27 crc kubenswrapper[4875]: I0130 17:15:27.694436 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt" Jan 30 17:15:27 crc kubenswrapper[4875]: I0130 17:15:27.694446 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt" event={"ID":"2984f7e2-f590-4d66-ab1b-76ee8d3a7869","Type":"ContainerDied","Data":"1006df5b66dddedee8275ac5b66a724a77f07283570fdcaeeebd85e40e3c8179"} Jan 30 17:15:27 crc kubenswrapper[4875]: I0130 17:15:27.694517 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1006df5b66dddedee8275ac5b66a724a77f07283570fdcaeeebd85e40e3c8179" Jan 30 17:15:27 crc kubenswrapper[4875]: I0130 17:15:27.697485 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-z4tpp" Jan 30 17:15:27 crc kubenswrapper[4875]: I0130 17:15:27.697503 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-z4tpp" event={"ID":"bb305e99-aa29-41e8-97de-f49f2fdd8e7b","Type":"ContainerDied","Data":"ed032b25445d4dff25b920e9122e90c7b5ba7a5c489ddf5279b03680a22df454"} Jan 30 17:15:27 crc kubenswrapper[4875]: I0130 17:15:27.697536 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed032b25445d4dff25b920e9122e90c7b5ba7a5c489ddf5279b03680a22df454" Jan 30 17:15:27 crc kubenswrapper[4875]: I0130 17:15:27.699109 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-kgb4q" event={"ID":"529f3b7f-281a-4cd3-a0be-885fc730c789","Type":"ContainerDied","Data":"6c19ddef47496cadd8efbfeee0f63ce95b35fc2077bac6cc6c67fabbafc1bfa1"} Jan 30 17:15:27 crc kubenswrapper[4875]: I0130 17:15:27.699137 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c19ddef47496cadd8efbfeee0f63ce95b35fc2077bac6cc6c67fabbafc1bfa1" Jan 30 17:15:27 crc kubenswrapper[4875]: I0130 17:15:27.699140 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-kgb4q" Jan 30 17:15:27 crc kubenswrapper[4875]: I0130 17:15:27.700771 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-jkxb9" event={"ID":"84e1d2c4-624d-42d8-93fc-d203ec6a9c0f","Type":"ContainerDied","Data":"5e60acec692daf3951a9b6f63508f024b067a60c910c64615ec48151f9006f53"} Jan 30 17:15:27 crc kubenswrapper[4875]: I0130 17:15:27.700801 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e60acec692daf3951a9b6f63508f024b067a60c910c64615ec48151f9006f53" Jan 30 17:15:27 crc kubenswrapper[4875]: I0130 17:15:27.700828 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-jkxb9" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.339508 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj"] Jan 30 17:15:28 crc kubenswrapper[4875]: E0130 17:15:28.340091 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2984f7e2-f590-4d66-ab1b-76ee8d3a7869" containerName="mariadb-account-create-update" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.340113 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="2984f7e2-f590-4d66-ab1b-76ee8d3a7869" containerName="mariadb-account-create-update" Jan 30 17:15:28 crc kubenswrapper[4875]: E0130 17:15:28.340151 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346898dc-db0f-4f45-aa32-d4234d759042" containerName="mariadb-account-create-update" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.340160 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="346898dc-db0f-4f45-aa32-d4234d759042" containerName="mariadb-account-create-update" Jan 30 17:15:28 crc kubenswrapper[4875]: E0130 17:15:28.340178 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3d7a5e-cb17-44f8-9898-c41e0cff56bf" containerName="mariadb-account-create-update" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.340188 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3d7a5e-cb17-44f8-9898-c41e0cff56bf" containerName="mariadb-account-create-update" Jan 30 17:15:28 crc kubenswrapper[4875]: E0130 17:15:28.340206 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e1d2c4-624d-42d8-93fc-d203ec6a9c0f" containerName="mariadb-database-create" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.340215 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e1d2c4-624d-42d8-93fc-d203ec6a9c0f" containerName="mariadb-database-create" Jan 30 17:15:28 crc kubenswrapper[4875]: E0130 17:15:28.340234 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb305e99-aa29-41e8-97de-f49f2fdd8e7b" containerName="mariadb-database-create" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.340243 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb305e99-aa29-41e8-97de-f49f2fdd8e7b" containerName="mariadb-database-create" Jan 30 17:15:28 crc kubenswrapper[4875]: E0130 17:15:28.340320 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529f3b7f-281a-4cd3-a0be-885fc730c789" containerName="mariadb-database-create" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.340332 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="529f3b7f-281a-4cd3-a0be-885fc730c789" containerName="mariadb-database-create" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.340577 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f3d7a5e-cb17-44f8-9898-c41e0cff56bf" containerName="mariadb-account-create-update" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.340621 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="529f3b7f-281a-4cd3-a0be-885fc730c789" containerName="mariadb-database-create" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.340639 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="346898dc-db0f-4f45-aa32-d4234d759042" containerName="mariadb-account-create-update" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.340650 4875 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2984f7e2-f590-4d66-ab1b-76ee8d3a7869" containerName="mariadb-account-create-update" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.340669 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb305e99-aa29-41e8-97de-f49f2fdd8e7b" containerName="mariadb-database-create" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.340678 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e1d2c4-624d-42d8-93fc-d203ec6a9c0f" containerName="mariadb-database-create" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.341415 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.343891 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-drz8r" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.346206 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.357678 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj"] Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.361652 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.400994 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-hjfhj\" (UID: \"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.401135 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-hjfhj\" (UID: \"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.401191 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j82qc\" (UniqueName: \"kubernetes.io/projected/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-kube-api-access-j82qc\") pod \"nova-kuttl-cell0-conductor-db-sync-hjfhj\" (UID: \"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.502842 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-hjfhj\" (UID: \"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.502961 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-hjfhj\" (UID: \"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0\") " 
pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.503008 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j82qc\" (UniqueName: \"kubernetes.io/projected/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-kube-api-access-j82qc\") pod \"nova-kuttl-cell0-conductor-db-sync-hjfhj\" (UID: \"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.516727 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-hjfhj\" (UID: \"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.524410 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j82qc\" (UniqueName: \"kubernetes.io/projected/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-kube-api-access-j82qc\") pod \"nova-kuttl-cell0-conductor-db-sync-hjfhj\" (UID: \"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.530367 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-hjfhj\" (UID: \"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj" Jan 30 17:15:28 crc kubenswrapper[4875]: I0130 17:15:28.658843 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj" Jan 30 17:15:29 crc kubenswrapper[4875]: I0130 17:15:29.083732 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj"] Jan 30 17:15:29 crc kubenswrapper[4875]: W0130 17:15:29.090838 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6aa5cab_8934_4528_ab4b_0e2e08cb67b0.slice/crio-7c45680078f05f3999f433224839a42ef3293f2c1a4f32b16e9d6d678290b8d4 WatchSource:0}: Error finding container 7c45680078f05f3999f433224839a42ef3293f2c1a4f32b16e9d6d678290b8d4: Status 404 returned error can't find the container with id 7c45680078f05f3999f433224839a42ef3293f2c1a4f32b16e9d6d678290b8d4 Jan 30 17:15:29 crc kubenswrapper[4875]: I0130 17:15:29.719842 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj" event={"ID":"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0","Type":"ContainerStarted","Data":"7c45680078f05f3999f433224839a42ef3293f2c1a4f32b16e9d6d678290b8d4"} Jan 30 17:15:37 crc kubenswrapper[4875]: I0130 17:15:37.808515 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj" event={"ID":"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0","Type":"ContainerStarted","Data":"1ee145f8190b14013fee6b7110901c003ecc5b37c2438d9ccb09e3440982d394"} Jan 30 17:15:46 crc kubenswrapper[4875]: I0130 17:15:46.884446 4875 generic.go:334] "Generic (PLEG): container finished" podID="f6aa5cab-8934-4528-ab4b-0e2e08cb67b0" containerID="1ee145f8190b14013fee6b7110901c003ecc5b37c2438d9ccb09e3440982d394" exitCode=0 Jan 30 17:15:46 crc kubenswrapper[4875]: I0130 17:15:46.884521 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj" event={"ID":"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0","Type":"ContainerDied","Data":"1ee145f8190b14013fee6b7110901c003ecc5b37c2438d9ccb09e3440982d394"} Jan 30 17:15:48 crc kubenswrapper[4875]: I0130 17:15:48.194761 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj" Jan 30 17:15:48 crc kubenswrapper[4875]: I0130 17:15:48.307799 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-config-data\") pod \"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0\" (UID: \"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0\") " Jan 30 17:15:48 crc kubenswrapper[4875]: I0130 17:15:48.307888 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j82qc\" (UniqueName: \"kubernetes.io/projected/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-kube-api-access-j82qc\") pod \"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0\" (UID: \"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0\") " Jan 30 17:15:48 crc kubenswrapper[4875]: I0130 17:15:48.307970 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-scripts\") pod \"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0\" (UID: \"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0\") " Jan 30 17:15:48 crc kubenswrapper[4875]: I0130 17:15:48.313723 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-scripts" (OuterVolumeSpecName: "scripts") pod "f6aa5cab-8934-4528-ab4b-0e2e08cb67b0" (UID: "f6aa5cab-8934-4528-ab4b-0e2e08cb67b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:48 crc kubenswrapper[4875]: I0130 17:15:48.313977 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-kube-api-access-j82qc" (OuterVolumeSpecName: "kube-api-access-j82qc") pod "f6aa5cab-8934-4528-ab4b-0e2e08cb67b0" (UID: "f6aa5cab-8934-4528-ab4b-0e2e08cb67b0"). InnerVolumeSpecName "kube-api-access-j82qc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:48 crc kubenswrapper[4875]: I0130 17:15:48.328960 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-config-data" (OuterVolumeSpecName: "config-data") pod "f6aa5cab-8934-4528-ab4b-0e2e08cb67b0" (UID: "f6aa5cab-8934-4528-ab4b-0e2e08cb67b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:48 crc kubenswrapper[4875]: I0130 17:15:48.409930 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:48 crc kubenswrapper[4875]: I0130 17:15:48.409965 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:48 crc kubenswrapper[4875]: I0130 17:15:48.409979 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j82qc\" (UniqueName: \"kubernetes.io/projected/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0-kube-api-access-j82qc\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:48 crc kubenswrapper[4875]: I0130 17:15:48.900707 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj" event={"ID":"f6aa5cab-8934-4528-ab4b-0e2e08cb67b0","Type":"ContainerDied","Data":"7c45680078f05f3999f433224839a42ef3293f2c1a4f32b16e9d6d678290b8d4"} Jan 30 17:15:48 crc kubenswrapper[4875]: I0130 17:15:48.900738 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c45680078f05f3999f433224839a42ef3293f2c1a4f32b16e9d6d678290b8d4" Jan 30 17:15:48 crc kubenswrapper[4875]: I0130 17:15:48.900786 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj" Jan 30 17:15:49 crc kubenswrapper[4875]: I0130 17:15:49.014320 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:15:49 crc kubenswrapper[4875]: E0130 17:15:49.014904 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6aa5cab-8934-4528-ab4b-0e2e08cb67b0" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 30 17:15:49 crc kubenswrapper[4875]: I0130 17:15:49.014922 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6aa5cab-8934-4528-ab4b-0e2e08cb67b0" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 30 17:15:49 crc kubenswrapper[4875]: I0130 17:15:49.015093 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6aa5cab-8934-4528-ab4b-0e2e08cb67b0" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 30 17:15:49 crc kubenswrapper[4875]: I0130 17:15:49.015515 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:15:49 crc kubenswrapper[4875]: I0130 17:15:49.018568 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-drz8r" Jan 30 17:15:49 crc kubenswrapper[4875]: I0130 17:15:49.018568 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 30 17:15:49 crc kubenswrapper[4875]: I0130 17:15:49.034803 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:15:49 crc kubenswrapper[4875]: I0130 17:15:49.119272 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae65aa7-4fcd-4724-90ba-2a70bcf7472b-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3ae65aa7-4fcd-4724-90ba-2a70bcf7472b\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:15:49 crc kubenswrapper[4875]: I0130 17:15:49.119410 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8m2f\" (UniqueName: \"kubernetes.io/projected/3ae65aa7-4fcd-4724-90ba-2a70bcf7472b-kube-api-access-f8m2f\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3ae65aa7-4fcd-4724-90ba-2a70bcf7472b\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:15:49 crc kubenswrapper[4875]: I0130 17:15:49.221044 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae65aa7-4fcd-4724-90ba-2a70bcf7472b-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3ae65aa7-4fcd-4724-90ba-2a70bcf7472b\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:15:49 crc kubenswrapper[4875]: I0130 17:15:49.221178 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8m2f\" (UniqueName: \"kubernetes.io/projected/3ae65aa7-4fcd-4724-90ba-2a70bcf7472b-kube-api-access-f8m2f\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3ae65aa7-4fcd-4724-90ba-2a70bcf7472b\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:15:49 crc kubenswrapper[4875]: I0130 17:15:49.225278 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae65aa7-4fcd-4724-90ba-2a70bcf7472b-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3ae65aa7-4fcd-4724-90ba-2a70bcf7472b\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:15:49 crc kubenswrapper[4875]: I0130 17:15:49.235725 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8m2f\" (UniqueName: \"kubernetes.io/projected/3ae65aa7-4fcd-4724-90ba-2a70bcf7472b-kube-api-access-f8m2f\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3ae65aa7-4fcd-4724-90ba-2a70bcf7472b\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:15:49 crc kubenswrapper[4875]: I0130 17:15:49.361206 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:15:49 crc kubenswrapper[4875]: I0130 17:15:49.801825 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:15:49 crc kubenswrapper[4875]: W0130 17:15:49.804379 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ae65aa7_4fcd_4724_90ba_2a70bcf7472b.slice/crio-bc2889586f1bd9bcd13d2d535b9e3e13a9a16dfac45aae4ffa2fb4ea286441a5 WatchSource:0}: Error finding container bc2889586f1bd9bcd13d2d535b9e3e13a9a16dfac45aae4ffa2fb4ea286441a5: Status 404 returned error can't find the container with id bc2889586f1bd9bcd13d2d535b9e3e13a9a16dfac45aae4ffa2fb4ea286441a5 Jan 30 17:15:49 crc kubenswrapper[4875]: I0130 17:15:49.908753 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"3ae65aa7-4fcd-4724-90ba-2a70bcf7472b","Type":"ContainerStarted","Data":"bc2889586f1bd9bcd13d2d535b9e3e13a9a16dfac45aae4ffa2fb4ea286441a5"} Jan 30 17:15:50 crc kubenswrapper[4875]: I0130 17:15:50.918080 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"3ae65aa7-4fcd-4724-90ba-2a70bcf7472b","Type":"ContainerStarted","Data":"a66b72db7b2fdec520f275371d478d2c4ac23db968ce7e3511f943e6a03e2735"} Jan 30 17:15:50 crc kubenswrapper[4875]: I0130 17:15:50.919550 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:15:50 crc kubenswrapper[4875]: I0130 17:15:50.943842 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.943820298 podStartE2EDuration="2.943820298s" podCreationTimestamp="2026-01-30 17:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:50.938819564 +0000 UTC m=+1161.486182957" watchObservedRunningTime="2026-01-30 17:15:50.943820298 +0000 UTC m=+1161.491183691" Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.393851 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.797785 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd"] Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.799079 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.804177 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.804340 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.814987 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd"] Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.838042 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwzv6\" (UniqueName: \"kubernetes.io/projected/91d5408a-71a2-48dd-bc00-17f3aa048238-kube-api-access-dwzv6\") pod \"nova-kuttl-cell0-cell-mapping-rkspd\" (UID: \"91d5408a-71a2-48dd-bc00-17f3aa048238\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.838361 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d5408a-71a2-48dd-bc00-17f3aa048238-config-data\") pod \"nova-kuttl-cell0-cell-mapping-rkspd\" (UID: \"91d5408a-71a2-48dd-bc00-17f3aa048238\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.838447 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d5408a-71a2-48dd-bc00-17f3aa048238-scripts\") pod \"nova-kuttl-cell0-cell-mapping-rkspd\" (UID: \"91d5408a-71a2-48dd-bc00-17f3aa048238\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.936146 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.937693 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.939656 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.940784 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d5408a-71a2-48dd-bc00-17f3aa048238-scripts\") pod \"nova-kuttl-cell0-cell-mapping-rkspd\" (UID: \"91d5408a-71a2-48dd-bc00-17f3aa048238\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.940906 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwzv6\" (UniqueName: \"kubernetes.io/projected/91d5408a-71a2-48dd-bc00-17f3aa048238-kube-api-access-dwzv6\") pod \"nova-kuttl-cell0-cell-mapping-rkspd\" (UID: \"91d5408a-71a2-48dd-bc00-17f3aa048238\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.940954 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d5408a-71a2-48dd-bc00-17f3aa048238-config-data\") pod \"nova-kuttl-cell0-cell-mapping-rkspd\" (UID: \"91d5408a-71a2-48dd-bc00-17f3aa048238\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.948691 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d5408a-71a2-48dd-bc00-17f3aa048238-scripts\") pod \"nova-kuttl-cell0-cell-mapping-rkspd\" (UID: \"91d5408a-71a2-48dd-bc00-17f3aa048238\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.949366 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d5408a-71a2-48dd-bc00-17f3aa048238-config-data\") pod \"nova-kuttl-cell0-cell-mapping-rkspd\" (UID: \"91d5408a-71a2-48dd-bc00-17f3aa048238\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.954969 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:15:54 crc kubenswrapper[4875]: I0130 17:15:54.991321 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwzv6\" (UniqueName: \"kubernetes.io/projected/91d5408a-71a2-48dd-bc00-17f3aa048238-kube-api-access-dwzv6\") pod \"nova-kuttl-cell0-cell-mapping-rkspd\" (UID: \"91d5408a-71a2-48dd-bc00-17f3aa048238\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.025114 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.026058 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.028403 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.042173 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feb34589-d4ad-4995-afe2-d3181b3c5039-config-data\") pod \"nova-kuttl-api-0\" (UID: \"feb34589-d4ad-4995-afe2-d3181b3c5039\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.042220 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdcjx\" (UniqueName: \"kubernetes.io/projected/feb34589-d4ad-4995-afe2-d3181b3c5039-kube-api-access-kdcjx\") pod \"nova-kuttl-api-0\" (UID: \"feb34589-d4ad-4995-afe2-d3181b3c5039\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.042239 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/feb34589-d4ad-4995-afe2-d3181b3c5039-logs\") pod \"nova-kuttl-api-0\" (UID: \"feb34589-d4ad-4995-afe2-d3181b3c5039\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.045009 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.117683 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.120890 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.122233 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.130223 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.148305 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7jbq\" (UniqueName: \"kubernetes.io/projected/e576a578-9108-4a1d-b61d-e004ebac31d8-kube-api-access-h7jbq\") pod \"nova-kuttl-scheduler-0\" (UID: \"e576a578-9108-4a1d-b61d-e004ebac31d8\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.148368 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e576a578-9108-4a1d-b61d-e004ebac31d8-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"e576a578-9108-4a1d-b61d-e004ebac31d8\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.148400 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feb34589-d4ad-4995-afe2-d3181b3c5039-config-data\") pod \"nova-kuttl-api-0\" (UID: \"feb34589-d4ad-4995-afe2-d3181b3c5039\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.148436 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdcjx\" (UniqueName: \"kubernetes.io/projected/feb34589-d4ad-4995-afe2-d3181b3c5039-kube-api-access-kdcjx\") pod \"nova-kuttl-api-0\" (UID: \"feb34589-d4ad-4995-afe2-d3181b3c5039\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.148460 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/feb34589-d4ad-4995-afe2-d3181b3c5039-logs\") pod \"nova-kuttl-api-0\" (UID: \"feb34589-d4ad-4995-afe2-d3181b3c5039\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.148966 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/feb34589-d4ad-4995-afe2-d3181b3c5039-logs\") pod \"nova-kuttl-api-0\" (UID: \"feb34589-d4ad-4995-afe2-d3181b3c5039\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.153441 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feb34589-d4ad-4995-afe2-d3181b3c5039-config-data\") pod \"nova-kuttl-api-0\" (UID: \"feb34589-d4ad-4995-afe2-d3181b3c5039\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.166844 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.173819 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdcjx\" (UniqueName: \"kubernetes.io/projected/feb34589-d4ad-4995-afe2-d3181b3c5039-kube-api-access-kdcjx\") pod \"nova-kuttl-api-0\" (UID: \"feb34589-d4ad-4995-afe2-d3181b3c5039\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.177119 4875 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.178295 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.189265 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.221222 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.249954 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghngw\" (UniqueName: \"kubernetes.io/projected/4bbb2c92-8124-49f3-b278-c77b4b0d8a52-kube-api-access-ghngw\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"4bbb2c92-8124-49f3-b278-c77b4b0d8a52\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.250448 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28d7c084-3d5d-4561-ab0d-762245e20fd8-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"28d7c084-3d5d-4561-ab0d-762245e20fd8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.250976 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d7c084-3d5d-4561-ab0d-762245e20fd8-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"28d7c084-3d5d-4561-ab0d-762245e20fd8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.251012 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e576a578-9108-4a1d-b61d-e004ebac31d8-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"e576a578-9108-4a1d-b61d-e004ebac31d8\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.251156 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m99lp\" (UniqueName: \"kubernetes.io/projected/28d7c084-3d5d-4561-ab0d-762245e20fd8-kube-api-access-m99lp\") pod \"nova-kuttl-metadata-0\" (UID: \"28d7c084-3d5d-4561-ab0d-762245e20fd8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.251212 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbb2c92-8124-49f3-b278-c77b4b0d8a52-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"4bbb2c92-8124-49f3-b278-c77b4b0d8a52\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.251266 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7jbq\" (UniqueName: \"kubernetes.io/projected/e576a578-9108-4a1d-b61d-e004ebac31d8-kube-api-access-h7jbq\") pod \"nova-kuttl-scheduler-0\" (UID: \"e576a578-9108-4a1d-b61d-e004ebac31d8\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.258644 4875 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e576a578-9108-4a1d-b61d-e004ebac31d8-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"e576a578-9108-4a1d-b61d-e004ebac31d8\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.282565 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7jbq\" (UniqueName: \"kubernetes.io/projected/e576a578-9108-4a1d-b61d-e004ebac31d8-kube-api-access-h7jbq\") pod \"nova-kuttl-scheduler-0\" (UID: \"e576a578-9108-4a1d-b61d-e004ebac31d8\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.335641 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.345161 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.353010 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m99lp\" (UniqueName: \"kubernetes.io/projected/28d7c084-3d5d-4561-ab0d-762245e20fd8-kube-api-access-m99lp\") pod \"nova-kuttl-metadata-0\" (UID: \"28d7c084-3d5d-4561-ab0d-762245e20fd8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.353083 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbb2c92-8124-49f3-b278-c77b4b0d8a52-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"4bbb2c92-8124-49f3-b278-c77b4b0d8a52\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.353132 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghngw\" (UniqueName: \"kubernetes.io/projected/4bbb2c92-8124-49f3-b278-c77b4b0d8a52-kube-api-access-ghngw\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"4bbb2c92-8124-49f3-b278-c77b4b0d8a52\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.353177 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28d7c084-3d5d-4561-ab0d-762245e20fd8-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"28d7c084-3d5d-4561-ab0d-762245e20fd8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.353194 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d7c084-3d5d-4561-ab0d-762245e20fd8-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"28d7c084-3d5d-4561-ab0d-762245e20fd8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.355964 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d7c084-3d5d-4561-ab0d-762245e20fd8-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"28d7c084-3d5d-4561-ab0d-762245e20fd8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.358557 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28d7c084-3d5d-4561-ab0d-762245e20fd8-logs\") 
pod \"nova-kuttl-metadata-0\" (UID: \"28d7c084-3d5d-4561-ab0d-762245e20fd8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.359732 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbb2c92-8124-49f3-b278-c77b4b0d8a52-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"4bbb2c92-8124-49f3-b278-c77b4b0d8a52\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.372786 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m99lp\" (UniqueName: \"kubernetes.io/projected/28d7c084-3d5d-4561-ab0d-762245e20fd8-kube-api-access-m99lp\") pod \"nova-kuttl-metadata-0\" (UID: \"28d7c084-3d5d-4561-ab0d-762245e20fd8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.379783 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghngw\" (UniqueName: \"kubernetes.io/projected/4bbb2c92-8124-49f3-b278-c77b4b0d8a52-kube-api-access-ghngw\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"4bbb2c92-8124-49f3-b278-c77b4b0d8a52\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.402170 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd"] Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.549536 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.557242 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.700814 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc"] Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.702147 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.706893 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.707172 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.730523 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc"] Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.759683 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrx8b\" (UniqueName: \"kubernetes.io/projected/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-kube-api-access-lrx8b\") pod \"nova-kuttl-cell1-conductor-db-sync-w4dqc\" (UID: \"2e90ae8f-59a8-45bd-8d17-8f09cec682c3\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.759772 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-w4dqc\" (UID: \"2e90ae8f-59a8-45bd-8d17-8f09cec682c3\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.760078 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-w4dqc\" (UID: \"2e90ae8f-59a8-45bd-8d17-8f09cec682c3\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.799440 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.810993 4875 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:15:55 crc kubenswrapper[4875]: W0130 17:15:55.860813 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfeb34589_d4ad_4995_afe2_d3181b3c5039.slice/crio-36ecfa9b0b4c553ec8936523239b4e98775bcb3799368c9a6766af0b36c3caa2 WatchSource:0}: Error finding container 36ecfa9b0b4c553ec8936523239b4e98775bcb3799368c9a6766af0b36c3caa2: Status 404 returned error can't find the container with id 36ecfa9b0b4c553ec8936523239b4e98775bcb3799368c9a6766af0b36c3caa2 Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.870877 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-w4dqc\" (UID: \"2e90ae8f-59a8-45bd-8d17-8f09cec682c3\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.870960 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrx8b\" (UniqueName: \"kubernetes.io/projected/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-kube-api-access-lrx8b\") pod 
\"nova-kuttl-cell1-conductor-db-sync-w4dqc\" (UID: \"2e90ae8f-59a8-45bd-8d17-8f09cec682c3\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.871024 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-w4dqc\" (UID: \"2e90ae8f-59a8-45bd-8d17-8f09cec682c3\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.877279 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-w4dqc\" (UID: \"2e90ae8f-59a8-45bd-8d17-8f09cec682c3\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.878440 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-w4dqc\" (UID: \"2e90ae8f-59a8-45bd-8d17-8f09cec682c3\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.879514 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.893219 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrx8b\" (UniqueName: \"kubernetes.io/projected/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-kube-api-access-lrx8b\") pod \"nova-kuttl-cell1-conductor-db-sync-w4dqc\" (UID: \"2e90ae8f-59a8-45bd-8d17-8f09cec682c3\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.971670 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"feb34589-d4ad-4995-afe2-d3181b3c5039","Type":"ContainerStarted","Data":"36ecfa9b0b4c553ec8936523239b4e98775bcb3799368c9a6766af0b36c3caa2"} Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.972926 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"e576a578-9108-4a1d-b61d-e004ebac31d8","Type":"ContainerStarted","Data":"9859bd2f710054184f44b6ef7c1a7494c13664d3ffcd7372cd469b8f91643c0d"} Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.974154 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" event={"ID":"91d5408a-71a2-48dd-bc00-17f3aa048238","Type":"ContainerStarted","Data":"f9d9d03c31bfca34e1b4bec070d4a6098da638e2f1003d06b11810b6207aa4ca"} Jan 30 17:15:55 crc kubenswrapper[4875]: I0130 17:15:55.974198 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" event={"ID":"91d5408a-71a2-48dd-bc00-17f3aa048238","Type":"ContainerStarted","Data":"c825e54265c417885b8182f4fa20f2e33e4f168836225d7f23ec00503b6abbe7"} Jan 30 17:15:56 crc kubenswrapper[4875]: I0130 17:15:56.000251 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" podStartSLOduration=2.000230825 podStartE2EDuration="2.000230825s" podCreationTimestamp="2026-01-30 17:15:54 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:55.999885254 +0000 UTC m=+1166.547248667" watchObservedRunningTime="2026-01-30 17:15:56.000230825 +0000 UTC m=+1166.547594208" Jan 30 17:15:56 crc kubenswrapper[4875]: I0130 17:15:56.023781 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" Jan 30 17:15:56 crc kubenswrapper[4875]: W0130 17:15:56.058903 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bbb2c92_8124_49f3_b278_c77b4b0d8a52.slice/crio-b64fd953101cd86d44712244f67dc55d1adf41e4e4345c8c67d927e44aa1b819 WatchSource:0}: Error finding container b64fd953101cd86d44712244f67dc55d1adf41e4e4345c8c67d927e44aa1b819: Status 404 returned error can't find the container with id b64fd953101cd86d44712244f67dc55d1adf41e4e4345c8c67d927e44aa1b819 Jan 30 17:15:56 crc kubenswrapper[4875]: I0130 17:15:56.068335 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 30 17:15:56 crc kubenswrapper[4875]: I0130 17:15:56.110247 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:15:56 crc kubenswrapper[4875]: W0130 17:15:56.121211 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28d7c084_3d5d_4561_ab0d_762245e20fd8.slice/crio-68d7852c92244acd570355a990fa908306639d56df11bbdcea883ca189453fb6 WatchSource:0}: Error finding container 68d7852c92244acd570355a990fa908306639d56df11bbdcea883ca189453fb6: Status 404 returned error can't find the container with id 68d7852c92244acd570355a990fa908306639d56df11bbdcea883ca189453fb6 Jan 30 17:15:56 crc kubenswrapper[4875]: I0130 17:15:56.464131 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc"] Jan 30 17:15:56 crc kubenswrapper[4875]: I0130 17:15:56.985838 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" event={"ID":"2e90ae8f-59a8-45bd-8d17-8f09cec682c3","Type":"ContainerStarted","Data":"f42eb5c44f3af398b19cebfcaa54889d1c35331fafe5e8419b0a5ace7c57a44e"} Jan 30 17:15:56 crc kubenswrapper[4875]: I0130 17:15:56.986479 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" event={"ID":"2e90ae8f-59a8-45bd-8d17-8f09cec682c3","Type":"ContainerStarted","Data":"443c9ff48c0262f7673d29a477916ee6bf013b1497eb3093af1811207d563641"} Jan 30 17:15:56 crc kubenswrapper[4875]: I0130 17:15:56.992503 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"28d7c084-3d5d-4561-ab0d-762245e20fd8","Type":"ContainerStarted","Data":"68d7852c92244acd570355a990fa908306639d56df11bbdcea883ca189453fb6"} Jan 30 17:15:57 crc kubenswrapper[4875]: I0130 17:15:56.999934 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"4bbb2c92-8124-49f3-b278-c77b4b0d8a52","Type":"ContainerStarted","Data":"b64fd953101cd86d44712244f67dc55d1adf41e4e4345c8c67d927e44aa1b819"} Jan 30 17:15:57 crc kubenswrapper[4875]: I0130 17:15:57.011065 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" podStartSLOduration=2.011042425 podStartE2EDuration="2.011042425s" podCreationTimestamp="2026-01-30 17:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:57.006464157 +0000 UTC m=+1167.553827560" watchObservedRunningTime="2026-01-30 17:15:57.011042425 +0000 UTC m=+1167.558405808" Jan 30 17:16:00 crc kubenswrapper[4875]: I0130 17:16:00.025381 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"feb34589-d4ad-4995-afe2-d3181b3c5039","Type":"ContainerStarted","Data":"07ba2f3cbebfc4161fdc5776e497f5046353a5fa6fb9819890a6b7946964317a"} Jan 30 17:16:00 crc kubenswrapper[4875]: I0130 17:16:00.025919 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"feb34589-d4ad-4995-afe2-d3181b3c5039","Type":"ContainerStarted","Data":"1e83de1b7f61f07599ccbaf976bcbc5fd24f2a9880b8f85bdd6022887f1fe9a7"} Jan 30 17:16:00 crc kubenswrapper[4875]: I0130 17:16:00.028950 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"28d7c084-3d5d-4561-ab0d-762245e20fd8","Type":"ContainerStarted","Data":"0d06a3743b74578541666e14e8646f0fae58ad04e1b0fb99288acb32e287b4db"} Jan 30 17:16:00 crc kubenswrapper[4875]: I0130 17:16:00.029006 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"28d7c084-3d5d-4561-ab0d-762245e20fd8","Type":"ContainerStarted","Data":"282fd870ceb26b85669fa9c0ac8ec88c59b023037d0c7bdf656d562ebf09ee4a"} Jan 30 17:16:00 crc kubenswrapper[4875]: I0130 17:16:00.033072 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"e576a578-9108-4a1d-b61d-e004ebac31d8","Type":"ContainerStarted","Data":"cdbec8afbf33fc9ce70308847bf48994d3fa70a26ef4925133fd3a6994f5de1b"} Jan 30 17:16:00 crc kubenswrapper[4875]: I0130 17:16:00.035718 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"4bbb2c92-8124-49f3-b278-c77b4b0d8a52","Type":"ContainerStarted","Data":"7a7132bca0906b89335d0aa2d3779663174b8deed679ac2fa5adaf9404077c1d"} Jan 30 17:16:00 crc kubenswrapper[4875]: I0130 17:16:00.072224 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=1.730370854 podStartE2EDuration="5.072200427s" podCreationTimestamp="2026-01-30 17:15:55 +0000 UTC" firstStartedPulling="2026-01-30 17:15:56.130601212 +0000 UTC m=+1166.677964595" lastFinishedPulling="2026-01-30 17:15:59.472430785 +0000 UTC m=+1170.019794168" observedRunningTime="2026-01-30 17:16:00.066457938 +0000 UTC m=+1170.613821331" watchObservedRunningTime="2026-01-30 17:16:00.072200427 +0000 UTC m=+1170.619563810" Jan 30 17:16:00 crc kubenswrapper[4875]: I0130 17:16:00.076517 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.4365367669999998 podStartE2EDuration="6.076500675s" podCreationTimestamp="2026-01-30 17:15:54 +0000 UTC" firstStartedPulling="2026-01-30 17:15:55.866602197 +0000 UTC m=+1166.413965580" lastFinishedPulling="2026-01-30 17:15:59.506566085 +0000 UTC m=+1170.053929488" observedRunningTime="2026-01-30 17:16:00.051000923 +0000 UTC m=+1170.598364316" 
watchObservedRunningTime="2026-01-30 17:16:00.076500675 +0000 UTC m=+1170.623864048" Jan 30 17:16:00 crc kubenswrapper[4875]: I0130 17:16:00.086956 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=1.6733597740000001 podStartE2EDuration="5.086937486s" podCreationTimestamp="2026-01-30 17:15:55 +0000 UTC" firstStartedPulling="2026-01-30 17:15:56.063858956 +0000 UTC m=+1166.611222329" lastFinishedPulling="2026-01-30 17:15:59.477436658 +0000 UTC m=+1170.024800041" observedRunningTime="2026-01-30 17:16:00.081764567 +0000 UTC m=+1170.629127950" watchObservedRunningTime="2026-01-30 17:16:00.086937486 +0000 UTC m=+1170.634300869" Jan 30 17:16:00 crc kubenswrapper[4875]: I0130 17:16:00.102259 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.438913779 podStartE2EDuration="5.102240165s" podCreationTimestamp="2026-01-30 17:15:55 +0000 UTC" firstStartedPulling="2026-01-30 17:15:55.810741756 +0000 UTC m=+1166.358105139" lastFinishedPulling="2026-01-30 17:15:59.474068142 +0000 UTC m=+1170.021431525" observedRunningTime="2026-01-30 17:16:00.096332271 +0000 UTC m=+1170.643695654" watchObservedRunningTime="2026-01-30 17:16:00.102240165 +0000 UTC m=+1170.649603558" Jan 30 17:16:00 crc kubenswrapper[4875]: I0130 17:16:00.345564 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:00 crc kubenswrapper[4875]: I0130 17:16:00.549785 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:00 crc kubenswrapper[4875]: I0130 17:16:00.550496 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:00 crc kubenswrapper[4875]: I0130 17:16:00.558007 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:16:04 crc kubenswrapper[4875]: I0130 17:16:04.066945 4875 generic.go:334] "Generic (PLEG): container finished" podID="91d5408a-71a2-48dd-bc00-17f3aa048238" containerID="f9d9d03c31bfca34e1b4bec070d4a6098da638e2f1003d06b11810b6207aa4ca" exitCode=0 Jan 30 17:16:04 crc kubenswrapper[4875]: I0130 17:16:04.067039 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" event={"ID":"91d5408a-71a2-48dd-bc00-17f3aa048238","Type":"ContainerDied","Data":"f9d9d03c31bfca34e1b4bec070d4a6098da638e2f1003d06b11810b6207aa4ca"} Jan 30 17:16:04 crc kubenswrapper[4875]: I0130 17:16:04.068910 4875 generic.go:334] "Generic (PLEG): container finished" podID="2e90ae8f-59a8-45bd-8d17-8f09cec682c3" containerID="f42eb5c44f3af398b19cebfcaa54889d1c35331fafe5e8419b0a5ace7c57a44e" exitCode=0 Jan 30 17:16:04 crc kubenswrapper[4875]: I0130 17:16:04.068941 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" event={"ID":"2e90ae8f-59a8-45bd-8d17-8f09cec682c3","Type":"ContainerDied","Data":"f42eb5c44f3af398b19cebfcaa54889d1c35331fafe5e8419b0a5ace7c57a44e"} Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.336685 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.336904 4875 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.351960 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.377615 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.506725 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.517847 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.552842 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.552884 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.557408 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.573488 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.578871 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-config-data\") pod \"2e90ae8f-59a8-45bd-8d17-8f09cec682c3\" (UID: \"2e90ae8f-59a8-45bd-8d17-8f09cec682c3\") " Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.578938 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-scripts\") pod \"2e90ae8f-59a8-45bd-8d17-8f09cec682c3\" (UID: \"2e90ae8f-59a8-45bd-8d17-8f09cec682c3\") " Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.579042 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrx8b\" (UniqueName: \"kubernetes.io/projected/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-kube-api-access-lrx8b\") pod \"2e90ae8f-59a8-45bd-8d17-8f09cec682c3\" (UID: \"2e90ae8f-59a8-45bd-8d17-8f09cec682c3\") " Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.590809 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-scripts" (OuterVolumeSpecName: "scripts") pod "2e90ae8f-59a8-45bd-8d17-8f09cec682c3" (UID: "2e90ae8f-59a8-45bd-8d17-8f09cec682c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.591330 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-kube-api-access-lrx8b" (OuterVolumeSpecName: "kube-api-access-lrx8b") pod "2e90ae8f-59a8-45bd-8d17-8f09cec682c3" (UID: "2e90ae8f-59a8-45bd-8d17-8f09cec682c3"). InnerVolumeSpecName "kube-api-access-lrx8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.608484 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-config-data" (OuterVolumeSpecName: "config-data") pod "2e90ae8f-59a8-45bd-8d17-8f09cec682c3" (UID: "2e90ae8f-59a8-45bd-8d17-8f09cec682c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.680825 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwzv6\" (UniqueName: \"kubernetes.io/projected/91d5408a-71a2-48dd-bc00-17f3aa048238-kube-api-access-dwzv6\") pod \"91d5408a-71a2-48dd-bc00-17f3aa048238\" (UID: \"91d5408a-71a2-48dd-bc00-17f3aa048238\") " Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.680943 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d5408a-71a2-48dd-bc00-17f3aa048238-config-data\") pod \"91d5408a-71a2-48dd-bc00-17f3aa048238\" (UID: \"91d5408a-71a2-48dd-bc00-17f3aa048238\") " Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.680991 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d5408a-71a2-48dd-bc00-17f3aa048238-scripts\") pod \"91d5408a-71a2-48dd-bc00-17f3aa048238\" (UID: \"91d5408a-71a2-48dd-bc00-17f3aa048238\") " Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.681533 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.681559 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.681572 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrx8b\" (UniqueName: \"kubernetes.io/projected/2e90ae8f-59a8-45bd-8d17-8f09cec682c3-kube-api-access-lrx8b\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.683987 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d5408a-71a2-48dd-bc00-17f3aa048238-kube-api-access-dwzv6" (OuterVolumeSpecName: "kube-api-access-dwzv6") pod "91d5408a-71a2-48dd-bc00-17f3aa048238" (UID: "91d5408a-71a2-48dd-bc00-17f3aa048238"). InnerVolumeSpecName "kube-api-access-dwzv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.686234 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d5408a-71a2-48dd-bc00-17f3aa048238-scripts" (OuterVolumeSpecName: "scripts") pod "91d5408a-71a2-48dd-bc00-17f3aa048238" (UID: "91d5408a-71a2-48dd-bc00-17f3aa048238"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.700829 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d5408a-71a2-48dd-bc00-17f3aa048238-config-data" (OuterVolumeSpecName: "config-data") pod "91d5408a-71a2-48dd-bc00-17f3aa048238" (UID: "91d5408a-71a2-48dd-bc00-17f3aa048238"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.783631 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwzv6\" (UniqueName: \"kubernetes.io/projected/91d5408a-71a2-48dd-bc00-17f3aa048238-kube-api-access-dwzv6\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.783678 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d5408a-71a2-48dd-bc00-17f3aa048238-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:05 crc kubenswrapper[4875]: I0130 17:16:05.783696 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91d5408a-71a2-48dd-bc00-17f3aa048238-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.086480 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.086732 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd" event={"ID":"91d5408a-71a2-48dd-bc00-17f3aa048238","Type":"ContainerDied","Data":"c825e54265c417885b8182f4fa20f2e33e4f168836225d7f23ec00503b6abbe7"} Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.086794 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c825e54265c417885b8182f4fa20f2e33e4f168836225d7f23ec00503b6abbe7" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.089532 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.094325 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc" event={"ID":"2e90ae8f-59a8-45bd-8d17-8f09cec682c3","Type":"ContainerDied","Data":"443c9ff48c0262f7673d29a477916ee6bf013b1497eb3093af1811207d563641"} Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.094365 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="443c9ff48c0262f7673d29a477916ee6bf013b1497eb3093af1811207d563641" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.118574 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.138140 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.222502 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:16:06 crc kubenswrapper[4875]: E0130 17:16:06.222818 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d5408a-71a2-48dd-bc00-17f3aa048238" containerName="nova-manage" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.222832 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d5408a-71a2-48dd-bc00-17f3aa048238" containerName="nova-manage" Jan 30 17:16:06 crc kubenswrapper[4875]: E0130 17:16:06.222858 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e90ae8f-59a8-45bd-8d17-8f09cec682c3" containerName="nova-kuttl-cell1-conductor-db-sync" 
Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.222863 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e90ae8f-59a8-45bd-8d17-8f09cec682c3" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.223014 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e90ae8f-59a8-45bd-8d17-8f09cec682c3" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.223042 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d5408a-71a2-48dd-bc00-17f3aa048238" containerName="nova-manage" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.223517 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.225685 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.241929 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.293363 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qzl7\" (UniqueName: \"kubernetes.io/projected/43fc7a38-c949-4c28-8449-f23a5224cf13-kube-api-access-5qzl7\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"43fc7a38-c949-4c28-8449-f23a5224cf13\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.293420 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43fc7a38-c949-4c28-8449-f23a5224cf13-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"43fc7a38-c949-4c28-8449-f23a5224cf13\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.405540 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.405901 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="feb34589-d4ad-4995-afe2-d3181b3c5039" containerName="nova-kuttl-api-api" containerID="cri-o://07ba2f3cbebfc4161fdc5776e497f5046353a5fa6fb9819890a6b7946964317a" gracePeriod=30 Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.405814 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="feb34589-d4ad-4995-afe2-d3181b3c5039" containerName="nova-kuttl-api-log" containerID="cri-o://1e83de1b7f61f07599ccbaf976bcbc5fd24f2a9880b8f85bdd6022887f1fe9a7" gracePeriod=30 Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.408133 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qzl7\" (UniqueName: \"kubernetes.io/projected/43fc7a38-c949-4c28-8449-f23a5224cf13-kube-api-access-5qzl7\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"43fc7a38-c949-4c28-8449-f23a5224cf13\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.408200 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/43fc7a38-c949-4c28-8449-f23a5224cf13-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"43fc7a38-c949-4c28-8449-f23a5224cf13\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.415498 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="feb34589-d4ad-4995-afe2-d3181b3c5039" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.130:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.415630 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="feb34589-d4ad-4995-afe2-d3181b3c5039" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.130:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.426654 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43fc7a38-c949-4c28-8449-f23a5224cf13-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"43fc7a38-c949-4c28-8449-f23a5224cf13\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.435163 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qzl7\" (UniqueName: \"kubernetes.io/projected/43fc7a38-c949-4c28-8449-f23a5224cf13-kube-api-access-5qzl7\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"43fc7a38-c949-4c28-8449-f23a5224cf13\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.453779 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.454203 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="28d7c084-3d5d-4561-ab0d-762245e20fd8" containerName="nova-kuttl-metadata-log" containerID="cri-o://282fd870ceb26b85669fa9c0ac8ec88c59b023037d0c7bdf656d562ebf09ee4a" gracePeriod=30 Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.454314 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="28d7c084-3d5d-4561-ab0d-762245e20fd8" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://0d06a3743b74578541666e14e8646f0fae58ad04e1b0fb99288acb32e287b4db" gracePeriod=30 Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.486805 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="28d7c084-3d5d-4561-ab0d-762245e20fd8" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.132:8775/\": EOF" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.486816 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="28d7c084-3d5d-4561-ab0d-762245e20fd8" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.132:8775/\": EOF" Jan 30 17:16:06 crc kubenswrapper[4875]: E0130 17:16:06.533348 4875 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28d7c084_3d5d_4561_ab0d_762245e20fd8.slice/crio-conmon-282fd870ceb26b85669fa9c0ac8ec88c59b023037d0c7bdf656d562ebf09ee4a.scope\": RecentStats: unable to find data in memory cache]" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.558955 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:16:06 crc kubenswrapper[4875]: I0130 17:16:06.719277 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:16:07 crc kubenswrapper[4875]: I0130 17:16:07.020240 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:16:07 crc kubenswrapper[4875]: W0130 17:16:07.024466 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43fc7a38_c949_4c28_8449_f23a5224cf13.slice/crio-c138b0440039255318a724fb6f2cd14df2d68449586628a89c9a4d78b63cefc8 WatchSource:0}: Error finding container c138b0440039255318a724fb6f2cd14df2d68449586628a89c9a4d78b63cefc8: Status 404 returned error can't find the container with id c138b0440039255318a724fb6f2cd14df2d68449586628a89c9a4d78b63cefc8 Jan 30 17:16:07 crc kubenswrapper[4875]: I0130 17:16:07.098076 4875 generic.go:334] "Generic (PLEG): container finished" podID="28d7c084-3d5d-4561-ab0d-762245e20fd8" containerID="282fd870ceb26b85669fa9c0ac8ec88c59b023037d0c7bdf656d562ebf09ee4a" exitCode=143 Jan 30 17:16:07 crc kubenswrapper[4875]: I0130 17:16:07.098154 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"28d7c084-3d5d-4561-ab0d-762245e20fd8","Type":"ContainerDied","Data":"282fd870ceb26b85669fa9c0ac8ec88c59b023037d0c7bdf656d562ebf09ee4a"} Jan 30 17:16:07 crc kubenswrapper[4875]: I0130 17:16:07.100230 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"43fc7a38-c949-4c28-8449-f23a5224cf13","Type":"ContainerStarted","Data":"c138b0440039255318a724fb6f2cd14df2d68449586628a89c9a4d78b63cefc8"} Jan 30 17:16:07 crc kubenswrapper[4875]: I0130 17:16:07.102644 4875 generic.go:334] "Generic (PLEG): container finished" podID="feb34589-d4ad-4995-afe2-d3181b3c5039" containerID="1e83de1b7f61f07599ccbaf976bcbc5fd24f2a9880b8f85bdd6022887f1fe9a7" exitCode=143 Jan 30 17:16:07 crc kubenswrapper[4875]: I0130 17:16:07.102712 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"feb34589-d4ad-4995-afe2-d3181b3c5039","Type":"ContainerDied","Data":"1e83de1b7f61f07599ccbaf976bcbc5fd24f2a9880b8f85bdd6022887f1fe9a7"} Jan 30 17:16:08 crc kubenswrapper[4875]: I0130 17:16:08.113937 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"43fc7a38-c949-4c28-8449-f23a5224cf13","Type":"ContainerStarted","Data":"005280cd9973d176e553373f86085b3bbe6de5fe194b3b8b97e602f815daf506"} Jan 30 17:16:08 crc kubenswrapper[4875]: I0130 17:16:08.113996 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="e576a578-9108-4a1d-b61d-e004ebac31d8" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://cdbec8afbf33fc9ce70308847bf48994d3fa70a26ef4925133fd3a6994f5de1b" gracePeriod=30 Jan 30 17:16:08 crc kubenswrapper[4875]: I0130 17:16:08.114388 
4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:16:08 crc kubenswrapper[4875]: I0130 17:16:08.135950 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.135930036 podStartE2EDuration="2.135930036s" podCreationTimestamp="2026-01-30 17:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:08.132990374 +0000 UTC m=+1178.680353787" watchObservedRunningTime="2026-01-30 17:16:08.135930036 +0000 UTC m=+1178.683293419" Jan 30 17:16:10 crc kubenswrapper[4875]: E0130 17:16:10.347950 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdbec8afbf33fc9ce70308847bf48994d3fa70a26ef4925133fd3a6994f5de1b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:16:10 crc kubenswrapper[4875]: E0130 17:16:10.351118 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdbec8afbf33fc9ce70308847bf48994d3fa70a26ef4925133fd3a6994f5de1b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:16:10 crc kubenswrapper[4875]: E0130 17:16:10.352732 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdbec8afbf33fc9ce70308847bf48994d3fa70a26ef4925133fd3a6994f5de1b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:16:10 crc kubenswrapper[4875]: E0130 17:16:10.352766 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="e576a578-9108-4a1d-b61d-e004ebac31d8" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.146802 4875 generic.go:334] "Generic (PLEG): container finished" podID="28d7c084-3d5d-4561-ab0d-762245e20fd8" containerID="0d06a3743b74578541666e14e8646f0fae58ad04e1b0fb99288acb32e287b4db" exitCode=0 Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.149101 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"28d7c084-3d5d-4561-ab0d-762245e20fd8","Type":"ContainerDied","Data":"0d06a3743b74578541666e14e8646f0fae58ad04e1b0fb99288acb32e287b4db"} Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.154855 4875 generic.go:334] "Generic (PLEG): container finished" podID="e576a578-9108-4a1d-b61d-e004ebac31d8" containerID="cdbec8afbf33fc9ce70308847bf48994d3fa70a26ef4925133fd3a6994f5de1b" exitCode=0 Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.154892 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"e576a578-9108-4a1d-b61d-e004ebac31d8","Type":"ContainerDied","Data":"cdbec8afbf33fc9ce70308847bf48994d3fa70a26ef4925133fd3a6994f5de1b"} Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.268289 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.388372 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d7c084-3d5d-4561-ab0d-762245e20fd8-config-data\") pod \"28d7c084-3d5d-4561-ab0d-762245e20fd8\" (UID: \"28d7c084-3d5d-4561-ab0d-762245e20fd8\") " Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.388462 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m99lp\" (UniqueName: \"kubernetes.io/projected/28d7c084-3d5d-4561-ab0d-762245e20fd8-kube-api-access-m99lp\") pod \"28d7c084-3d5d-4561-ab0d-762245e20fd8\" (UID: \"28d7c084-3d5d-4561-ab0d-762245e20fd8\") " Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.388508 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28d7c084-3d5d-4561-ab0d-762245e20fd8-logs\") pod \"28d7c084-3d5d-4561-ab0d-762245e20fd8\" (UID: \"28d7c084-3d5d-4561-ab0d-762245e20fd8\") " Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.389366 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28d7c084-3d5d-4561-ab0d-762245e20fd8-logs" (OuterVolumeSpecName: "logs") pod "28d7c084-3d5d-4561-ab0d-762245e20fd8" (UID: "28d7c084-3d5d-4561-ab0d-762245e20fd8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.397443 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d7c084-3d5d-4561-ab0d-762245e20fd8-kube-api-access-m99lp" (OuterVolumeSpecName: "kube-api-access-m99lp") pod "28d7c084-3d5d-4561-ab0d-762245e20fd8" (UID: "28d7c084-3d5d-4561-ab0d-762245e20fd8"). InnerVolumeSpecName "kube-api-access-m99lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.413871 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d7c084-3d5d-4561-ab0d-762245e20fd8-config-data" (OuterVolumeSpecName: "config-data") pod "28d7c084-3d5d-4561-ab0d-762245e20fd8" (UID: "28d7c084-3d5d-4561-ab0d-762245e20fd8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.454871 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.490857 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d7c084-3d5d-4561-ab0d-762245e20fd8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.491063 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m99lp\" (UniqueName: \"kubernetes.io/projected/28d7c084-3d5d-4561-ab0d-762245e20fd8-kube-api-access-m99lp\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.491123 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28d7c084-3d5d-4561-ab0d-762245e20fd8-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.592362 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e576a578-9108-4a1d-b61d-e004ebac31d8-config-data\") pod \"e576a578-9108-4a1d-b61d-e004ebac31d8\" (UID: \"e576a578-9108-4a1d-b61d-e004ebac31d8\") " Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.592407 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7jbq\" (UniqueName: \"kubernetes.io/projected/e576a578-9108-4a1d-b61d-e004ebac31d8-kube-api-access-h7jbq\") pod \"e576a578-9108-4a1d-b61d-e004ebac31d8\" (UID: \"e576a578-9108-4a1d-b61d-e004ebac31d8\") " Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.594945 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e576a578-9108-4a1d-b61d-e004ebac31d8-kube-api-access-h7jbq" (OuterVolumeSpecName: "kube-api-access-h7jbq") pod "e576a578-9108-4a1d-b61d-e004ebac31d8" (UID: "e576a578-9108-4a1d-b61d-e004ebac31d8"). InnerVolumeSpecName "kube-api-access-h7jbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.611308 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e576a578-9108-4a1d-b61d-e004ebac31d8-config-data" (OuterVolumeSpecName: "config-data") pod "e576a578-9108-4a1d-b61d-e004ebac31d8" (UID: "e576a578-9108-4a1d-b61d-e004ebac31d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.694336 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e576a578-9108-4a1d-b61d-e004ebac31d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:11 crc kubenswrapper[4875]: I0130 17:16:11.694371 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7jbq\" (UniqueName: \"kubernetes.io/projected/e576a578-9108-4a1d-b61d-e004ebac31d8-kube-api-access-h7jbq\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.165138 4875 generic.go:334] "Generic (PLEG): container finished" podID="feb34589-d4ad-4995-afe2-d3181b3c5039" containerID="07ba2f3cbebfc4161fdc5776e497f5046353a5fa6fb9819890a6b7946964317a" exitCode=0 Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.165349 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"feb34589-d4ad-4995-afe2-d3181b3c5039","Type":"ContainerDied","Data":"07ba2f3cbebfc4161fdc5776e497f5046353a5fa6fb9819890a6b7946964317a"} Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.167728 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"28d7c084-3d5d-4561-ab0d-762245e20fd8","Type":"ContainerDied","Data":"68d7852c92244acd570355a990fa908306639d56df11bbdcea883ca189453fb6"} Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.167757 4875 scope.go:117] "RemoveContainer" containerID="0d06a3743b74578541666e14e8646f0fae58ad04e1b0fb99288acb32e287b4db" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.167886 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.170973 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"e576a578-9108-4a1d-b61d-e004ebac31d8","Type":"ContainerDied","Data":"9859bd2f710054184f44b6ef7c1a7494c13664d3ffcd7372cd469b8f91643c0d"} Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.171194 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.218732 4875 scope.go:117] "RemoveContainer" containerID="282fd870ceb26b85669fa9c0ac8ec88c59b023037d0c7bdf656d562ebf09ee4a" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.233489 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.240257 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.250232 4875 scope.go:117] "RemoveContainer" containerID="cdbec8afbf33fc9ce70308847bf48994d3fa70a26ef4925133fd3a6994f5de1b" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.265869 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.280457 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.285846 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:16:12 crc kubenswrapper[4875]: E0130 17:16:12.286175 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d7c084-3d5d-4561-ab0d-762245e20fd8" containerName="nova-kuttl-metadata-log" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.286191 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d7c084-3d5d-4561-ab0d-762245e20fd8" containerName="nova-kuttl-metadata-log" Jan 30 17:16:12 crc kubenswrapper[4875]: E0130 17:16:12.286207 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e576a578-9108-4a1d-b61d-e004ebac31d8" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.286213 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="e576a578-9108-4a1d-b61d-e004ebac31d8" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:16:12 crc kubenswrapper[4875]: E0130 17:16:12.286228 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d7c084-3d5d-4561-ab0d-762245e20fd8" containerName="nova-kuttl-metadata-metadata" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.286234 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d7c084-3d5d-4561-ab0d-762245e20fd8" containerName="nova-kuttl-metadata-metadata" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.286379 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d7c084-3d5d-4561-ab0d-762245e20fd8" containerName="nova-kuttl-metadata-log" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.286391 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="e576a578-9108-4a1d-b61d-e004ebac31d8" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.286405 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d7c084-3d5d-4561-ab0d-762245e20fd8" containerName="nova-kuttl-metadata-metadata" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.287158 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.289129 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.301695 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.302716 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.307297 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.307925 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.316325 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.320550 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.431335 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdcjx\" (UniqueName: \"kubernetes.io/projected/feb34589-d4ad-4995-afe2-d3181b3c5039-kube-api-access-kdcjx\") pod \"feb34589-d4ad-4995-afe2-d3181b3c5039\" (UID: \"feb34589-d4ad-4995-afe2-d3181b3c5039\") " Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.431399 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feb34589-d4ad-4995-afe2-d3181b3c5039-config-data\") pod \"feb34589-d4ad-4995-afe2-d3181b3c5039\" (UID: \"feb34589-d4ad-4995-afe2-d3181b3c5039\") " Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.431518 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/feb34589-d4ad-4995-afe2-d3181b3c5039-logs\") pod \"feb34589-d4ad-4995-afe2-d3181b3c5039\" (UID: \"feb34589-d4ad-4995-afe2-d3181b3c5039\") " Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.431749 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8x9t\" (UniqueName: \"kubernetes.io/projected/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-kube-api-access-b8x9t\") pod \"nova-kuttl-metadata-0\" (UID: \"6778aba5-b0e3-4d9b-98e0-7e13960b79b7\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.431814 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"6778aba5-b0e3-4d9b-98e0-7e13960b79b7\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.431841 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5hrg\" (UniqueName: \"kubernetes.io/projected/a811b4cd-61f3-4833-9be7-bf20dd90986e-kube-api-access-h5hrg\") pod \"nova-kuttl-scheduler-0\" (UID: \"a811b4cd-61f3-4833-9be7-bf20dd90986e\") " 
pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.431862 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a811b4cd-61f3-4833-9be7-bf20dd90986e-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"a811b4cd-61f3-4833-9be7-bf20dd90986e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.432117 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feb34589-d4ad-4995-afe2-d3181b3c5039-logs" (OuterVolumeSpecName: "logs") pod "feb34589-d4ad-4995-afe2-d3181b3c5039" (UID: "feb34589-d4ad-4995-afe2-d3181b3c5039"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.432236 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"6778aba5-b0e3-4d9b-98e0-7e13960b79b7\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.432380 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/feb34589-d4ad-4995-afe2-d3181b3c5039-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.434783 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feb34589-d4ad-4995-afe2-d3181b3c5039-kube-api-access-kdcjx" (OuterVolumeSpecName: "kube-api-access-kdcjx") pod "feb34589-d4ad-4995-afe2-d3181b3c5039" (UID: "feb34589-d4ad-4995-afe2-d3181b3c5039"). InnerVolumeSpecName "kube-api-access-kdcjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.451216 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feb34589-d4ad-4995-afe2-d3181b3c5039-config-data" (OuterVolumeSpecName: "config-data") pod "feb34589-d4ad-4995-afe2-d3181b3c5039" (UID: "feb34589-d4ad-4995-afe2-d3181b3c5039"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.533193 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"6778aba5-b0e3-4d9b-98e0-7e13960b79b7\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.533260 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8x9t\" (UniqueName: \"kubernetes.io/projected/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-kube-api-access-b8x9t\") pod \"nova-kuttl-metadata-0\" (UID: \"6778aba5-b0e3-4d9b-98e0-7e13960b79b7\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.533307 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"6778aba5-b0e3-4d9b-98e0-7e13960b79b7\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.533327 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5hrg\" (UniqueName: \"kubernetes.io/projected/a811b4cd-61f3-4833-9be7-bf20dd90986e-kube-api-access-h5hrg\") pod \"nova-kuttl-scheduler-0\" (UID: \"a811b4cd-61f3-4833-9be7-bf20dd90986e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.533348 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a811b4cd-61f3-4833-9be7-bf20dd90986e-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"a811b4cd-61f3-4833-9be7-bf20dd90986e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.533442 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdcjx\" (UniqueName: \"kubernetes.io/projected/feb34589-d4ad-4995-afe2-d3181b3c5039-kube-api-access-kdcjx\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.533455 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feb34589-d4ad-4995-afe2-d3181b3c5039-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.533990 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"6778aba5-b0e3-4d9b-98e0-7e13960b79b7\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.536799 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a811b4cd-61f3-4833-9be7-bf20dd90986e-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"a811b4cd-61f3-4833-9be7-bf20dd90986e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.537671 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"6778aba5-b0e3-4d9b-98e0-7e13960b79b7\") " 
pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.548299 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8x9t\" (UniqueName: \"kubernetes.io/projected/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-kube-api-access-b8x9t\") pod \"nova-kuttl-metadata-0\" (UID: \"6778aba5-b0e3-4d9b-98e0-7e13960b79b7\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.548305 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5hrg\" (UniqueName: \"kubernetes.io/projected/a811b4cd-61f3-4833-9be7-bf20dd90986e-kube-api-access-h5hrg\") pod \"nova-kuttl-scheduler-0\" (UID: \"a811b4cd-61f3-4833-9be7-bf20dd90986e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.644708 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:12 crc kubenswrapper[4875]: I0130 17:16:12.652876 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.071164 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:16:13 crc kubenswrapper[4875]: W0130 17:16:13.076422 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6778aba5_b0e3_4d9b_98e0_7e13960b79b7.slice/crio-941d7e3f566a54192b9d4d89a8147e8031c915d659cf50ce44c015598ca6f57a WatchSource:0}: Error finding container 941d7e3f566a54192b9d4d89a8147e8031c915d659cf50ce44c015598ca6f57a: Status 404 returned error can't find the container with id 941d7e3f566a54192b9d4d89a8147e8031c915d659cf50ce44c015598ca6f57a Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.163401 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:16:13 crc kubenswrapper[4875]: W0130 17:16:13.174114 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda811b4cd_61f3_4833_9be7_bf20dd90986e.slice/crio-ea697857d7710c030b7b35a9c4b9818b84b63f228b8700d1f5bfcee258bf15f0 WatchSource:0}: Error finding container ea697857d7710c030b7b35a9c4b9818b84b63f228b8700d1f5bfcee258bf15f0: Status 404 returned error can't find the container with id ea697857d7710c030b7b35a9c4b9818b84b63f228b8700d1f5bfcee258bf15f0 Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.187193 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6778aba5-b0e3-4d9b-98e0-7e13960b79b7","Type":"ContainerStarted","Data":"941d7e3f566a54192b9d4d89a8147e8031c915d659cf50ce44c015598ca6f57a"} Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.192030 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"feb34589-d4ad-4995-afe2-d3181b3c5039","Type":"ContainerDied","Data":"36ecfa9b0b4c553ec8936523239b4e98775bcb3799368c9a6766af0b36c3caa2"} Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.192071 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.192089 4875 scope.go:117] "RemoveContainer" containerID="07ba2f3cbebfc4161fdc5776e497f5046353a5fa6fb9819890a6b7946964317a" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.268094 4875 scope.go:117] "RemoveContainer" containerID="1e83de1b7f61f07599ccbaf976bcbc5fd24f2a9880b8f85bdd6022887f1fe9a7" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.319909 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.355049 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.363704 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:16:13 crc kubenswrapper[4875]: E0130 17:16:13.363992 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb34589-d4ad-4995-afe2-d3181b3c5039" containerName="nova-kuttl-api-log" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.364009 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb34589-d4ad-4995-afe2-d3181b3c5039" containerName="nova-kuttl-api-log" Jan 30 17:16:13 crc kubenswrapper[4875]: E0130 17:16:13.364045 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb34589-d4ad-4995-afe2-d3181b3c5039" containerName="nova-kuttl-api-api" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.364052 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb34589-d4ad-4995-afe2-d3181b3c5039" containerName="nova-kuttl-api-api" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.364187 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb34589-d4ad-4995-afe2-d3181b3c5039" containerName="nova-kuttl-api-api" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.364208 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb34589-d4ad-4995-afe2-d3181b3c5039" containerName="nova-kuttl-api-log" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.364985 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.368811 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.373559 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.555735 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt8w7\" (UniqueName: \"kubernetes.io/projected/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-kube-api-access-pt8w7\") pod \"nova-kuttl-api-0\" (UID: \"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.555829 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-logs\") pod \"nova-kuttl-api-0\" (UID: \"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.555852 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-config-data\") pod \"nova-kuttl-api-0\" (UID: \"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.657497 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-logs\") pod \"nova-kuttl-api-0\" (UID: \"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.657546 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-config-data\") pod \"nova-kuttl-api-0\" (UID: \"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.657681 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt8w7\" (UniqueName: \"kubernetes.io/projected/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-kube-api-access-pt8w7\") pod \"nova-kuttl-api-0\" (UID: \"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.658304 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-logs\") pod \"nova-kuttl-api-0\" (UID: \"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.661244 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-config-data\") pod \"nova-kuttl-api-0\" (UID: \"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.672675 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt8w7\" 
(UniqueName: \"kubernetes.io/projected/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-kube-api-access-pt8w7\") pod \"nova-kuttl-api-0\" (UID: \"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:13 crc kubenswrapper[4875]: I0130 17:16:13.678904 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:14 crc kubenswrapper[4875]: I0130 17:16:14.086396 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:16:14 crc kubenswrapper[4875]: I0130 17:16:14.146010 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28d7c084-3d5d-4561-ab0d-762245e20fd8" path="/var/lib/kubelet/pods/28d7c084-3d5d-4561-ab0d-762245e20fd8/volumes" Jan 30 17:16:14 crc kubenswrapper[4875]: I0130 17:16:14.146797 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e576a578-9108-4a1d-b61d-e004ebac31d8" path="/var/lib/kubelet/pods/e576a578-9108-4a1d-b61d-e004ebac31d8/volumes" Jan 30 17:16:14 crc kubenswrapper[4875]: I0130 17:16:14.147339 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feb34589-d4ad-4995-afe2-d3181b3c5039" path="/var/lib/kubelet/pods/feb34589-d4ad-4995-afe2-d3181b3c5039/volumes" Jan 30 17:16:14 crc kubenswrapper[4875]: I0130 17:16:14.203260 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6778aba5-b0e3-4d9b-98e0-7e13960b79b7","Type":"ContainerStarted","Data":"c5c9720f840aa4393937e4824b77c350e4c78cc42a2db437f1d271eca4a05dc8"} Jan 30 17:16:14 crc kubenswrapper[4875]: I0130 17:16:14.203303 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6778aba5-b0e3-4d9b-98e0-7e13960b79b7","Type":"ContainerStarted","Data":"2f5bc42a1ed9a2645d90f47f78f7e7f2de49b833f1a614545b9764fbaa900d4d"} Jan 30 17:16:14 crc kubenswrapper[4875]: I0130 17:16:14.205134 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c","Type":"ContainerStarted","Data":"d69d609d5dd41eefc39bcd1a181750189d16eda3c39c8812249d03deabc28cb1"} Jan 30 17:16:14 crc kubenswrapper[4875]: I0130 17:16:14.206953 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"a811b4cd-61f3-4833-9be7-bf20dd90986e","Type":"ContainerStarted","Data":"11cb67ff1f0e456aeab35742137835170a488b412b510ee1146bbab38258311f"} Jan 30 17:16:14 crc kubenswrapper[4875]: I0130 17:16:14.207010 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"a811b4cd-61f3-4833-9be7-bf20dd90986e","Type":"ContainerStarted","Data":"ea697857d7710c030b7b35a9c4b9818b84b63f228b8700d1f5bfcee258bf15f0"} Jan 30 17:16:14 crc kubenswrapper[4875]: I0130 17:16:14.230831 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.230810348 podStartE2EDuration="2.230810348s" podCreationTimestamp="2026-01-30 17:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:14.223503476 +0000 UTC m=+1184.770866869" watchObservedRunningTime="2026-01-30 17:16:14.230810348 +0000 UTC m=+1184.778173731" Jan 30 17:16:14 crc kubenswrapper[4875]: I0130 17:16:14.249183 4875 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.249156533 podStartE2EDuration="2.249156533s" podCreationTimestamp="2026-01-30 17:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:14.248727878 +0000 UTC m=+1184.796091261" watchObservedRunningTime="2026-01-30 17:16:14.249156533 +0000 UTC m=+1184.796519916" Jan 30 17:16:15 crc kubenswrapper[4875]: I0130 17:16:15.216591 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c","Type":"ContainerStarted","Data":"998700a07409dfda4b016ea6a501cee792de4c19e000b9c34a35012dccb9d8d3"} Jan 30 17:16:15 crc kubenswrapper[4875]: I0130 17:16:15.216889 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c","Type":"ContainerStarted","Data":"d483fa59529db73c2f3f6061a376fa43f4d43d4ffcaadf435f0dff60387da7d2"} Jan 30 17:16:16 crc kubenswrapper[4875]: I0130 17:16:16.587381 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:16:16 crc kubenswrapper[4875]: I0130 17:16:16.607708 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=3.607684037 podStartE2EDuration="3.607684037s" podCreationTimestamp="2026-01-30 17:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:15.234339596 +0000 UTC m=+1185.781702979" watchObservedRunningTime="2026-01-30 17:16:16.607684037 +0000 UTC m=+1187.155047460" Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.007722 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686"] Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.008688 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.010927 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.018257 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686"] Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.018378 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.113235 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4068c0c1-1588-401e-b1c2-597bfb06913a-config-data\") pod \"nova-kuttl-cell1-cell-mapping-jm686\" (UID: \"4068c0c1-1588-401e-b1c2-597bfb06913a\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.113311 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdlwj\" (UniqueName: \"kubernetes.io/projected/4068c0c1-1588-401e-b1c2-597bfb06913a-kube-api-access-bdlwj\") pod \"nova-kuttl-cell1-cell-mapping-jm686\" (UID: \"4068c0c1-1588-401e-b1c2-597bfb06913a\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.113436 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4068c0c1-1588-401e-b1c2-597bfb06913a-scripts\") pod \"nova-kuttl-cell1-cell-mapping-jm686\" (UID: \"4068c0c1-1588-401e-b1c2-597bfb06913a\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.215564 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4068c0c1-1588-401e-b1c2-597bfb06913a-scripts\") pod \"nova-kuttl-cell1-cell-mapping-jm686\" (UID: \"4068c0c1-1588-401e-b1c2-597bfb06913a\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.215690 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4068c0c1-1588-401e-b1c2-597bfb06913a-config-data\") pod \"nova-kuttl-cell1-cell-mapping-jm686\" (UID: \"4068c0c1-1588-401e-b1c2-597bfb06913a\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.215736 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdlwj\" (UniqueName: \"kubernetes.io/projected/4068c0c1-1588-401e-b1c2-597bfb06913a-kube-api-access-bdlwj\") pod \"nova-kuttl-cell1-cell-mapping-jm686\" (UID: \"4068c0c1-1588-401e-b1c2-597bfb06913a\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.220738 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4068c0c1-1588-401e-b1c2-597bfb06913a-scripts\") pod \"nova-kuttl-cell1-cell-mapping-jm686\" (UID: \"4068c0c1-1588-401e-b1c2-597bfb06913a\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" Jan 30 17:16:17 crc 
kubenswrapper[4875]: I0130 17:16:17.223371 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4068c0c1-1588-401e-b1c2-597bfb06913a-config-data\") pod \"nova-kuttl-cell1-cell-mapping-jm686\" (UID: \"4068c0c1-1588-401e-b1c2-597bfb06913a\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.238136 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdlwj\" (UniqueName: \"kubernetes.io/projected/4068c0c1-1588-401e-b1c2-597bfb06913a-kube-api-access-bdlwj\") pod \"nova-kuttl-cell1-cell-mapping-jm686\" (UID: \"4068c0c1-1588-401e-b1c2-597bfb06913a\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.326806 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.645073 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.645725 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.653324 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:17 crc kubenswrapper[4875]: I0130 17:16:17.842554 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686"] Jan 30 17:16:18 crc kubenswrapper[4875]: I0130 17:16:18.245040 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" event={"ID":"4068c0c1-1588-401e-b1c2-597bfb06913a","Type":"ContainerStarted","Data":"2f0250f16c14f44d852ea9668f4d0c6a912d5207ea16bf8ff8f6716a22f8de15"} Jan 30 17:16:18 crc kubenswrapper[4875]: I0130 17:16:18.245091 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" event={"ID":"4068c0c1-1588-401e-b1c2-597bfb06913a","Type":"ContainerStarted","Data":"dc14281640823ca18504ccb1401036ad06ee2a72ac63ed6e9fc3c045f20267f4"} Jan 30 17:16:18 crc kubenswrapper[4875]: I0130 17:16:18.264794 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" podStartSLOduration=2.264775146 podStartE2EDuration="2.264775146s" podCreationTimestamp="2026-01-30 17:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:18.262612701 +0000 UTC m=+1188.809976084" watchObservedRunningTime="2026-01-30 17:16:18.264775146 +0000 UTC m=+1188.812138529" Jan 30 17:16:22 crc kubenswrapper[4875]: I0130 17:16:22.645366 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:22 crc kubenswrapper[4875]: I0130 17:16:22.645941 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:22 crc kubenswrapper[4875]: I0130 17:16:22.653681 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:22 crc 
kubenswrapper[4875]: I0130 17:16:22.694330 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:23 crc kubenswrapper[4875]: I0130 17:16:23.302335 4875 generic.go:334] "Generic (PLEG): container finished" podID="4068c0c1-1588-401e-b1c2-597bfb06913a" containerID="2f0250f16c14f44d852ea9668f4d0c6a912d5207ea16bf8ff8f6716a22f8de15" exitCode=0 Jan 30 17:16:23 crc kubenswrapper[4875]: I0130 17:16:23.302421 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" event={"ID":"4068c0c1-1588-401e-b1c2-597bfb06913a","Type":"ContainerDied","Data":"2f0250f16c14f44d852ea9668f4d0c6a912d5207ea16bf8ff8f6716a22f8de15"} Jan 30 17:16:23 crc kubenswrapper[4875]: I0130 17:16:23.340188 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:23 crc kubenswrapper[4875]: I0130 17:16:23.679248 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:23 crc kubenswrapper[4875]: I0130 17:16:23.680238 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:23 crc kubenswrapper[4875]: I0130 17:16:23.727883 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6778aba5-b0e3-4d9b-98e0-7e13960b79b7" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.136:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:16:23 crc kubenswrapper[4875]: I0130 17:16:23.728050 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6778aba5-b0e3-4d9b-98e0-7e13960b79b7" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.136:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:16:24 crc kubenswrapper[4875]: I0130 17:16:24.592113 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" Jan 30 17:16:24 crc kubenswrapper[4875]: I0130 17:16:24.720765 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:16:24 crc kubenswrapper[4875]: I0130 17:16:24.762009 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4068c0c1-1588-401e-b1c2-597bfb06913a-scripts\") pod \"4068c0c1-1588-401e-b1c2-597bfb06913a\" (UID: \"4068c0c1-1588-401e-b1c2-597bfb06913a\") " Jan 30 17:16:24 crc kubenswrapper[4875]: I0130 17:16:24.762316 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdlwj\" (UniqueName: \"kubernetes.io/projected/4068c0c1-1588-401e-b1c2-597bfb06913a-kube-api-access-bdlwj\") pod \"4068c0c1-1588-401e-b1c2-597bfb06913a\" (UID: \"4068c0c1-1588-401e-b1c2-597bfb06913a\") " Jan 30 17:16:24 crc kubenswrapper[4875]: I0130 17:16:24.762414 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4068c0c1-1588-401e-b1c2-597bfb06913a-config-data\") pod \"4068c0c1-1588-401e-b1c2-597bfb06913a\" (UID: \"4068c0c1-1588-401e-b1c2-597bfb06913a\") " Jan 30 17:16:24 crc kubenswrapper[4875]: I0130 17:16:24.762888 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:16:24 crc kubenswrapper[4875]: I0130 17:16:24.769461 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4068c0c1-1588-401e-b1c2-597bfb06913a-scripts" (OuterVolumeSpecName: "scripts") pod "4068c0c1-1588-401e-b1c2-597bfb06913a" (UID: "4068c0c1-1588-401e-b1c2-597bfb06913a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:24 crc kubenswrapper[4875]: I0130 17:16:24.783312 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4068c0c1-1588-401e-b1c2-597bfb06913a-kube-api-access-bdlwj" (OuterVolumeSpecName: "kube-api-access-bdlwj") pod "4068c0c1-1588-401e-b1c2-597bfb06913a" (UID: "4068c0c1-1588-401e-b1c2-597bfb06913a"). InnerVolumeSpecName "kube-api-access-bdlwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:16:24 crc kubenswrapper[4875]: I0130 17:16:24.788311 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4068c0c1-1588-401e-b1c2-597bfb06913a-config-data" (OuterVolumeSpecName: "config-data") pod "4068c0c1-1588-401e-b1c2-597bfb06913a" (UID: "4068c0c1-1588-401e-b1c2-597bfb06913a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:24 crc kubenswrapper[4875]: I0130 17:16:24.864196 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdlwj\" (UniqueName: \"kubernetes.io/projected/4068c0c1-1588-401e-b1c2-597bfb06913a-kube-api-access-bdlwj\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:24 crc kubenswrapper[4875]: I0130 17:16:24.864255 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4068c0c1-1588-401e-b1c2-597bfb06913a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:24 crc kubenswrapper[4875]: I0130 17:16:24.864264 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4068c0c1-1588-401e-b1c2-597bfb06913a-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:25 crc kubenswrapper[4875]: I0130 17:16:25.319141 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" event={"ID":"4068c0c1-1588-401e-b1c2-597bfb06913a","Type":"ContainerDied","Data":"dc14281640823ca18504ccb1401036ad06ee2a72ac63ed6e9fc3c045f20267f4"} Jan 30 17:16:25 crc kubenswrapper[4875]: I0130 17:16:25.319203 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc14281640823ca18504ccb1401036ad06ee2a72ac63ed6e9fc3c045f20267f4" Jan 30 17:16:25 crc kubenswrapper[4875]: I0130 17:16:25.319211 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686" Jan 30 17:16:25 crc kubenswrapper[4875]: I0130 17:16:25.448334 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:16:25 crc kubenswrapper[4875]: I0130 17:16:25.449211 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c" containerName="nova-kuttl-api-api" containerID="cri-o://998700a07409dfda4b016ea6a501cee792de4c19e000b9c34a35012dccb9d8d3" gracePeriod=30 Jan 30 17:16:25 crc kubenswrapper[4875]: I0130 17:16:25.450010 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c" containerName="nova-kuttl-api-log" containerID="cri-o://d483fa59529db73c2f3f6061a376fa43f4d43d4ffcaadf435f0dff60387da7d2" gracePeriod=30 Jan 30 17:16:25 crc kubenswrapper[4875]: I0130 17:16:25.485164 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:16:25 crc kubenswrapper[4875]: I0130 17:16:25.485688 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="a811b4cd-61f3-4833-9be7-bf20dd90986e" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://11cb67ff1f0e456aeab35742137835170a488b412b510ee1146bbab38258311f" gracePeriod=30 Jan 30 17:16:25 crc kubenswrapper[4875]: I0130 17:16:25.530143 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:16:25 crc kubenswrapper[4875]: I0130 17:16:25.530796 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6778aba5-b0e3-4d9b-98e0-7e13960b79b7" containerName="nova-kuttl-metadata-log" 
containerID="cri-o://2f5bc42a1ed9a2645d90f47f78f7e7f2de49b833f1a614545b9764fbaa900d4d" gracePeriod=30 Jan 30 17:16:25 crc kubenswrapper[4875]: I0130 17:16:25.531457 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6778aba5-b0e3-4d9b-98e0-7e13960b79b7" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://c5c9720f840aa4393937e4824b77c350e4c78cc42a2db437f1d271eca4a05dc8" gracePeriod=30 Jan 30 17:16:26 crc kubenswrapper[4875]: I0130 17:16:26.330275 4875 generic.go:334] "Generic (PLEG): container finished" podID="6778aba5-b0e3-4d9b-98e0-7e13960b79b7" containerID="2f5bc42a1ed9a2645d90f47f78f7e7f2de49b833f1a614545b9764fbaa900d4d" exitCode=143 Jan 30 17:16:26 crc kubenswrapper[4875]: I0130 17:16:26.330300 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6778aba5-b0e3-4d9b-98e0-7e13960b79b7","Type":"ContainerDied","Data":"2f5bc42a1ed9a2645d90f47f78f7e7f2de49b833f1a614545b9764fbaa900d4d"} Jan 30 17:16:26 crc kubenswrapper[4875]: I0130 17:16:26.333303 4875 generic.go:334] "Generic (PLEG): container finished" podID="9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c" containerID="d483fa59529db73c2f3f6061a376fa43f4d43d4ffcaadf435f0dff60387da7d2" exitCode=143 Jan 30 17:16:26 crc kubenswrapper[4875]: I0130 17:16:26.333345 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c","Type":"ContainerDied","Data":"d483fa59529db73c2f3f6061a376fa43f4d43d4ffcaadf435f0dff60387da7d2"} Jan 30 17:16:27 crc kubenswrapper[4875]: E0130 17:16:27.655627 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11cb67ff1f0e456aeab35742137835170a488b412b510ee1146bbab38258311f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:16:27 crc kubenswrapper[4875]: E0130 17:16:27.660290 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11cb67ff1f0e456aeab35742137835170a488b412b510ee1146bbab38258311f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:16:27 crc kubenswrapper[4875]: E0130 17:16:27.662038 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="11cb67ff1f0e456aeab35742137835170a488b412b510ee1146bbab38258311f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:16:27 crc kubenswrapper[4875]: E0130 17:16:27.662130 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="a811b4cd-61f3-4833-9be7-bf20dd90986e" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.192592 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.358970 4875 generic.go:334] "Generic (PLEG): container finished" podID="6778aba5-b0e3-4d9b-98e0-7e13960b79b7" containerID="c5c9720f840aa4393937e4824b77c350e4c78cc42a2db437f1d271eca4a05dc8" exitCode=0 Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.359304 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6778aba5-b0e3-4d9b-98e0-7e13960b79b7","Type":"ContainerDied","Data":"c5c9720f840aa4393937e4824b77c350e4c78cc42a2db437f1d271eca4a05dc8"} Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.359330 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6778aba5-b0e3-4d9b-98e0-7e13960b79b7","Type":"ContainerDied","Data":"941d7e3f566a54192b9d4d89a8147e8031c915d659cf50ce44c015598ca6f57a"} Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.359346 4875 scope.go:117] "RemoveContainer" containerID="c5c9720f840aa4393937e4824b77c350e4c78cc42a2db437f1d271eca4a05dc8" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.359726 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.380833 4875 scope.go:117] "RemoveContainer" containerID="2f5bc42a1ed9a2645d90f47f78f7e7f2de49b833f1a614545b9764fbaa900d4d" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.386027 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-config-data\") pod \"6778aba5-b0e3-4d9b-98e0-7e13960b79b7\" (UID: \"6778aba5-b0e3-4d9b-98e0-7e13960b79b7\") " Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.386080 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-logs\") pod \"6778aba5-b0e3-4d9b-98e0-7e13960b79b7\" (UID: \"6778aba5-b0e3-4d9b-98e0-7e13960b79b7\") " Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.386126 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8x9t\" (UniqueName: \"kubernetes.io/projected/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-kube-api-access-b8x9t\") pod \"6778aba5-b0e3-4d9b-98e0-7e13960b79b7\" (UID: \"6778aba5-b0e3-4d9b-98e0-7e13960b79b7\") " Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.386948 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-logs" (OuterVolumeSpecName: "logs") pod "6778aba5-b0e3-4d9b-98e0-7e13960b79b7" (UID: "6778aba5-b0e3-4d9b-98e0-7e13960b79b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.392467 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-kube-api-access-b8x9t" (OuterVolumeSpecName: "kube-api-access-b8x9t") pod "6778aba5-b0e3-4d9b-98e0-7e13960b79b7" (UID: "6778aba5-b0e3-4d9b-98e0-7e13960b79b7"). InnerVolumeSpecName "kube-api-access-b8x9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.409718 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-config-data" (OuterVolumeSpecName: "config-data") pod "6778aba5-b0e3-4d9b-98e0-7e13960b79b7" (UID: "6778aba5-b0e3-4d9b-98e0-7e13960b79b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.414913 4875 scope.go:117] "RemoveContainer" containerID="c5c9720f840aa4393937e4824b77c350e4c78cc42a2db437f1d271eca4a05dc8" Jan 30 17:16:29 crc kubenswrapper[4875]: E0130 17:16:29.416271 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5c9720f840aa4393937e4824b77c350e4c78cc42a2db437f1d271eca4a05dc8\": container with ID starting with c5c9720f840aa4393937e4824b77c350e4c78cc42a2db437f1d271eca4a05dc8 not found: ID does not exist" containerID="c5c9720f840aa4393937e4824b77c350e4c78cc42a2db437f1d271eca4a05dc8" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.416308 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5c9720f840aa4393937e4824b77c350e4c78cc42a2db437f1d271eca4a05dc8"} err="failed to get container status \"c5c9720f840aa4393937e4824b77c350e4c78cc42a2db437f1d271eca4a05dc8\": rpc error: code = NotFound desc = could not find container \"c5c9720f840aa4393937e4824b77c350e4c78cc42a2db437f1d271eca4a05dc8\": container with ID starting with c5c9720f840aa4393937e4824b77c350e4c78cc42a2db437f1d271eca4a05dc8 not found: ID does not exist" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.416334 4875 scope.go:117] "RemoveContainer" containerID="2f5bc42a1ed9a2645d90f47f78f7e7f2de49b833f1a614545b9764fbaa900d4d" Jan 30 17:16:29 crc kubenswrapper[4875]: E0130 17:16:29.416645 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5bc42a1ed9a2645d90f47f78f7e7f2de49b833f1a614545b9764fbaa900d4d\": container with ID starting with 2f5bc42a1ed9a2645d90f47f78f7e7f2de49b833f1a614545b9764fbaa900d4d not found: ID does not exist" containerID="2f5bc42a1ed9a2645d90f47f78f7e7f2de49b833f1a614545b9764fbaa900d4d" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.416690 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5bc42a1ed9a2645d90f47f78f7e7f2de49b833f1a614545b9764fbaa900d4d"} err="failed to get container status \"2f5bc42a1ed9a2645d90f47f78f7e7f2de49b833f1a614545b9764fbaa900d4d\": rpc error: code = NotFound desc = could not find container \"2f5bc42a1ed9a2645d90f47f78f7e7f2de49b833f1a614545b9764fbaa900d4d\": container with ID starting with 2f5bc42a1ed9a2645d90f47f78f7e7f2de49b833f1a614545b9764fbaa900d4d not found: ID does not exist" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.487602 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.487640 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.487654 4875 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8x9t\" (UniqueName: \"kubernetes.io/projected/6778aba5-b0e3-4d9b-98e0-7e13960b79b7-kube-api-access-b8x9t\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.692689 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.700374 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.718804 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:16:29 crc kubenswrapper[4875]: E0130 17:16:29.719363 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4068c0c1-1588-401e-b1c2-597bfb06913a" containerName="nova-manage" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.719380 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="4068c0c1-1588-401e-b1c2-597bfb06913a" containerName="nova-manage" Jan 30 17:16:29 crc kubenswrapper[4875]: E0130 17:16:29.719403 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6778aba5-b0e3-4d9b-98e0-7e13960b79b7" containerName="nova-kuttl-metadata-log" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.719410 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="6778aba5-b0e3-4d9b-98e0-7e13960b79b7" containerName="nova-kuttl-metadata-log" Jan 30 17:16:29 crc kubenswrapper[4875]: E0130 17:16:29.719430 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6778aba5-b0e3-4d9b-98e0-7e13960b79b7" containerName="nova-kuttl-metadata-metadata" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.719440 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="6778aba5-b0e3-4d9b-98e0-7e13960b79b7" containerName="nova-kuttl-metadata-metadata" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.719817 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="6778aba5-b0e3-4d9b-98e0-7e13960b79b7" containerName="nova-kuttl-metadata-metadata" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.719835 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="4068c0c1-1588-401e-b1c2-597bfb06913a" containerName="nova-manage" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.719850 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="6778aba5-b0e3-4d9b-98e0-7e13960b79b7" containerName="nova-kuttl-metadata-log" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.721235 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.725252 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.736027 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.737451 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.892854 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a811b4cd-61f3-4833-9be7-bf20dd90986e-config-data\") pod \"a811b4cd-61f3-4833-9be7-bf20dd90986e\" (UID: \"a811b4cd-61f3-4833-9be7-bf20dd90986e\") " Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.892925 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5hrg\" (UniqueName: \"kubernetes.io/projected/a811b4cd-61f3-4833-9be7-bf20dd90986e-kube-api-access-h5hrg\") pod \"a811b4cd-61f3-4833-9be7-bf20dd90986e\" (UID: \"a811b4cd-61f3-4833-9be7-bf20dd90986e\") " Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.893236 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c742a1d-1edc-4c32-bb30-87ab682c735f-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"5c742a1d-1edc-4c32-bb30-87ab682c735f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.893295 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c742a1d-1edc-4c32-bb30-87ab682c735f-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"5c742a1d-1edc-4c32-bb30-87ab682c735f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.893324 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hgnv\" (UniqueName: \"kubernetes.io/projected/5c742a1d-1edc-4c32-bb30-87ab682c735f-kube-api-access-6hgnv\") pod \"nova-kuttl-metadata-0\" (UID: \"5c742a1d-1edc-4c32-bb30-87ab682c735f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.897003 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a811b4cd-61f3-4833-9be7-bf20dd90986e-kube-api-access-h5hrg" (OuterVolumeSpecName: "kube-api-access-h5hrg") pod "a811b4cd-61f3-4833-9be7-bf20dd90986e" (UID: "a811b4cd-61f3-4833-9be7-bf20dd90986e"). InnerVolumeSpecName "kube-api-access-h5hrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.915573 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a811b4cd-61f3-4833-9be7-bf20dd90986e-config-data" (OuterVolumeSpecName: "config-data") pod "a811b4cd-61f3-4833-9be7-bf20dd90986e" (UID: "a811b4cd-61f3-4833-9be7-bf20dd90986e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.994647 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c742a1d-1edc-4c32-bb30-87ab682c735f-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"5c742a1d-1edc-4c32-bb30-87ab682c735f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.995003 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c742a1d-1edc-4c32-bb30-87ab682c735f-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"5c742a1d-1edc-4c32-bb30-87ab682c735f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.995035 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hgnv\" (UniqueName: \"kubernetes.io/projected/5c742a1d-1edc-4c32-bb30-87ab682c735f-kube-api-access-6hgnv\") pod \"nova-kuttl-metadata-0\" (UID: \"5c742a1d-1edc-4c32-bb30-87ab682c735f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.995158 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c742a1d-1edc-4c32-bb30-87ab682c735f-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"5c742a1d-1edc-4c32-bb30-87ab682c735f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.995907 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a811b4cd-61f3-4833-9be7-bf20dd90986e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.996161 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5hrg\" (UniqueName: \"kubernetes.io/projected/a811b4cd-61f3-4833-9be7-bf20dd90986e-kube-api-access-h5hrg\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:29 crc kubenswrapper[4875]: I0130 17:16:29.999163 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c742a1d-1edc-4c32-bb30-87ab682c735f-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"5c742a1d-1edc-4c32-bb30-87ab682c735f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.010631 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hgnv\" (UniqueName: \"kubernetes.io/projected/5c742a1d-1edc-4c32-bb30-87ab682c735f-kube-api-access-6hgnv\") pod \"nova-kuttl-metadata-0\" (UID: \"5c742a1d-1edc-4c32-bb30-87ab682c735f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.091209 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.132187 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.150262 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6778aba5-b0e3-4d9b-98e0-7e13960b79b7" path="/var/lib/kubelet/pods/6778aba5-b0e3-4d9b-98e0-7e13960b79b7/volumes" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.200370 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt8w7\" (UniqueName: \"kubernetes.io/projected/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-kube-api-access-pt8w7\") pod \"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c\" (UID: \"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c\") " Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.200443 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-config-data\") pod \"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c\" (UID: \"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c\") " Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.200476 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-logs\") pod \"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c\" (UID: \"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c\") " Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.201904 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-logs" (OuterVolumeSpecName: "logs") pod "9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c" (UID: "9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.210477 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-kube-api-access-pt8w7" (OuterVolumeSpecName: "kube-api-access-pt8w7") pod "9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c" (UID: "9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c"). InnerVolumeSpecName "kube-api-access-pt8w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.227712 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-config-data" (OuterVolumeSpecName: "config-data") pod "9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c" (UID: "9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.303115 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt8w7\" (UniqueName: \"kubernetes.io/projected/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-kube-api-access-pt8w7\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.303147 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.303178 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.367176 4875 generic.go:334] "Generic (PLEG): container finished" podID="a811b4cd-61f3-4833-9be7-bf20dd90986e" containerID="11cb67ff1f0e456aeab35742137835170a488b412b510ee1146bbab38258311f" exitCode=0 Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.367218 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.367241 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"a811b4cd-61f3-4833-9be7-bf20dd90986e","Type":"ContainerDied","Data":"11cb67ff1f0e456aeab35742137835170a488b412b510ee1146bbab38258311f"} Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.367271 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"a811b4cd-61f3-4833-9be7-bf20dd90986e","Type":"ContainerDied","Data":"ea697857d7710c030b7b35a9c4b9818b84b63f228b8700d1f5bfcee258bf15f0"} Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.367289 4875 scope.go:117] "RemoveContainer" containerID="11cb67ff1f0e456aeab35742137835170a488b412b510ee1146bbab38258311f" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.375500 4875 generic.go:334] "Generic (PLEG): container finished" podID="9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c" containerID="998700a07409dfda4b016ea6a501cee792de4c19e000b9c34a35012dccb9d8d3" exitCode=0 Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.375815 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.375562 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c","Type":"ContainerDied","Data":"998700a07409dfda4b016ea6a501cee792de4c19e000b9c34a35012dccb9d8d3"} Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.375913 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c","Type":"ContainerDied","Data":"d69d609d5dd41eefc39bcd1a181750189d16eda3c39c8812249d03deabc28cb1"} Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.390463 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.392127 4875 scope.go:117] "RemoveContainer" containerID="11cb67ff1f0e456aeab35742137835170a488b412b510ee1146bbab38258311f" Jan 30 17:16:30 crc kubenswrapper[4875]: E0130 17:16:30.401743 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11cb67ff1f0e456aeab35742137835170a488b412b510ee1146bbab38258311f\": container with ID starting with 11cb67ff1f0e456aeab35742137835170a488b412b510ee1146bbab38258311f not found: ID does not exist" containerID="11cb67ff1f0e456aeab35742137835170a488b412b510ee1146bbab38258311f" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.401799 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11cb67ff1f0e456aeab35742137835170a488b412b510ee1146bbab38258311f"} err="failed to get container status \"11cb67ff1f0e456aeab35742137835170a488b412b510ee1146bbab38258311f\": rpc error: code = NotFound desc = could not find container \"11cb67ff1f0e456aeab35742137835170a488b412b510ee1146bbab38258311f\": container with ID starting with 11cb67ff1f0e456aeab35742137835170a488b412b510ee1146bbab38258311f not found: ID does not exist" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.401833 4875 scope.go:117] "RemoveContainer" containerID="998700a07409dfda4b016ea6a501cee792de4c19e000b9c34a35012dccb9d8d3" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.411315 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.421786 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:16:30 crc kubenswrapper[4875]: E0130 17:16:30.422308 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a811b4cd-61f3-4833-9be7-bf20dd90986e" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.422341 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="a811b4cd-61f3-4833-9be7-bf20dd90986e" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:16:30 crc kubenswrapper[4875]: E0130 17:16:30.422393 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c" containerName="nova-kuttl-api-api" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.422405 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c" containerName="nova-kuttl-api-api" Jan 30 17:16:30 crc kubenswrapper[4875]: E0130 17:16:30.422422 4875 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c" containerName="nova-kuttl-api-log" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.422431 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c" containerName="nova-kuttl-api-log" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.422689 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="a811b4cd-61f3-4833-9be7-bf20dd90986e" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.422729 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c" containerName="nova-kuttl-api-log" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.422748 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c" containerName="nova-kuttl-api-api" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.423528 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.426285 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.442884 4875 scope.go:117] "RemoveContainer" containerID="d483fa59529db73c2f3f6061a376fa43f4d43d4ffcaadf435f0dff60387da7d2" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.445598 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.462107 4875 scope.go:117] "RemoveContainer" containerID="998700a07409dfda4b016ea6a501cee792de4c19e000b9c34a35012dccb9d8d3" Jan 30 17:16:30 crc kubenswrapper[4875]: E0130 17:16:30.464091 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"998700a07409dfda4b016ea6a501cee792de4c19e000b9c34a35012dccb9d8d3\": container with ID starting with 998700a07409dfda4b016ea6a501cee792de4c19e000b9c34a35012dccb9d8d3 not found: ID does not exist" containerID="998700a07409dfda4b016ea6a501cee792de4c19e000b9c34a35012dccb9d8d3" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.464139 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"998700a07409dfda4b016ea6a501cee792de4c19e000b9c34a35012dccb9d8d3"} err="failed to get container status \"998700a07409dfda4b016ea6a501cee792de4c19e000b9c34a35012dccb9d8d3\": rpc error: code = NotFound desc = could not find container \"998700a07409dfda4b016ea6a501cee792de4c19e000b9c34a35012dccb9d8d3\": container with ID starting with 998700a07409dfda4b016ea6a501cee792de4c19e000b9c34a35012dccb9d8d3 not found: ID does not exist" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.464174 4875 scope.go:117] "RemoveContainer" containerID="d483fa59529db73c2f3f6061a376fa43f4d43d4ffcaadf435f0dff60387da7d2" Jan 30 17:16:30 crc kubenswrapper[4875]: E0130 17:16:30.465088 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d483fa59529db73c2f3f6061a376fa43f4d43d4ffcaadf435f0dff60387da7d2\": container with ID starting with d483fa59529db73c2f3f6061a376fa43f4d43d4ffcaadf435f0dff60387da7d2 not found: ID does not exist" containerID="d483fa59529db73c2f3f6061a376fa43f4d43d4ffcaadf435f0dff60387da7d2" Jan 30 17:16:30 crc 
kubenswrapper[4875]: I0130 17:16:30.465115 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d483fa59529db73c2f3f6061a376fa43f4d43d4ffcaadf435f0dff60387da7d2"} err="failed to get container status \"d483fa59529db73c2f3f6061a376fa43f4d43d4ffcaadf435f0dff60387da7d2\": rpc error: code = NotFound desc = could not find container \"d483fa59529db73c2f3f6061a376fa43f4d43d4ffcaadf435f0dff60387da7d2\": container with ID starting with d483fa59529db73c2f3f6061a376fa43f4d43d4ffcaadf435f0dff60387da7d2 not found: ID does not exist" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.466293 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.481329 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.495079 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.496498 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.499159 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.511868 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.539967 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:16:30 crc kubenswrapper[4875]: W0130 17:16:30.541226 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c742a1d_1edc_4c32_bb30_87ab682c735f.slice/crio-98e4d4cb84e739ada7c25e5793f00314dd30dcdc17cf22557aeaa63444bda65d WatchSource:0}: Error finding container 98e4d4cb84e739ada7c25e5793f00314dd30dcdc17cf22557aeaa63444bda65d: Status 404 returned error can't find the container with id 98e4d4cb84e739ada7c25e5793f00314dd30dcdc17cf22557aeaa63444bda65d Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.606555 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgk9c\" (UniqueName: \"kubernetes.io/projected/9cc68f17-43a8-4027-9a59-481aeb6771d5-kube-api-access-pgk9c\") pod \"nova-kuttl-scheduler-0\" (UID: \"9cc68f17-43a8-4027-9a59-481aeb6771d5\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.606669 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-config-data\") pod \"nova-kuttl-api-0\" (UID: \"fca688e9-e9fb-417b-9cfe-a56b5e098a3a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.606702 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfrrw\" (UniqueName: \"kubernetes.io/projected/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-kube-api-access-sfrrw\") pod \"nova-kuttl-api-0\" (UID: \"fca688e9-e9fb-417b-9cfe-a56b5e098a3a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.606732 
4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cc68f17-43a8-4027-9a59-481aeb6771d5-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"9cc68f17-43a8-4027-9a59-481aeb6771d5\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.606748 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-logs\") pod \"nova-kuttl-api-0\" (UID: \"fca688e9-e9fb-417b-9cfe-a56b5e098a3a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.708492 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgk9c\" (UniqueName: \"kubernetes.io/projected/9cc68f17-43a8-4027-9a59-481aeb6771d5-kube-api-access-pgk9c\") pod \"nova-kuttl-scheduler-0\" (UID: \"9cc68f17-43a8-4027-9a59-481aeb6771d5\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.708642 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-config-data\") pod \"nova-kuttl-api-0\" (UID: \"fca688e9-e9fb-417b-9cfe-a56b5e098a3a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.708685 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfrrw\" (UniqueName: \"kubernetes.io/projected/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-kube-api-access-sfrrw\") pod \"nova-kuttl-api-0\" (UID: \"fca688e9-e9fb-417b-9cfe-a56b5e098a3a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.708710 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cc68f17-43a8-4027-9a59-481aeb6771d5-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"9cc68f17-43a8-4027-9a59-481aeb6771d5\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.708729 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-logs\") pod \"nova-kuttl-api-0\" (UID: \"fca688e9-e9fb-417b-9cfe-a56b5e098a3a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.709205 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-logs\") pod \"nova-kuttl-api-0\" (UID: \"fca688e9-e9fb-417b-9cfe-a56b5e098a3a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.714278 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-config-data\") pod \"nova-kuttl-api-0\" (UID: \"fca688e9-e9fb-417b-9cfe-a56b5e098a3a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.717015 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cc68f17-43a8-4027-9a59-481aeb6771d5-config-data\") pod 
\"nova-kuttl-scheduler-0\" (UID: \"9cc68f17-43a8-4027-9a59-481aeb6771d5\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.724905 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfrrw\" (UniqueName: \"kubernetes.io/projected/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-kube-api-access-sfrrw\") pod \"nova-kuttl-api-0\" (UID: \"fca688e9-e9fb-417b-9cfe-a56b5e098a3a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.725648 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgk9c\" (UniqueName: \"kubernetes.io/projected/9cc68f17-43a8-4027-9a59-481aeb6771d5-kube-api-access-pgk9c\") pod \"nova-kuttl-scheduler-0\" (UID: \"9cc68f17-43a8-4027-9a59-481aeb6771d5\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.746875 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:30 crc kubenswrapper[4875]: I0130 17:16:30.810306 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:31 crc kubenswrapper[4875]: I0130 17:16:31.181123 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:16:31 crc kubenswrapper[4875]: W0130 17:16:31.187289 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cc68f17_43a8_4027_9a59_481aeb6771d5.slice/crio-9563e4c166bb38cf30b1ecdecf2ee857b1a4590cce9053aec3c90aa391e21c49 WatchSource:0}: Error finding container 9563e4c166bb38cf30b1ecdecf2ee857b1a4590cce9053aec3c90aa391e21c49: Status 404 returned error can't find the container with id 9563e4c166bb38cf30b1ecdecf2ee857b1a4590cce9053aec3c90aa391e21c49 Jan 30 17:16:31 crc kubenswrapper[4875]: I0130 17:16:31.273094 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:16:31 crc kubenswrapper[4875]: W0130 17:16:31.282679 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfca688e9_e9fb_417b_9cfe_a56b5e098a3a.slice/crio-6e9d4a81200425c728fc18cb88b5326442861df24ba9589ed33580c94b17c254 WatchSource:0}: Error finding container 6e9d4a81200425c728fc18cb88b5326442861df24ba9589ed33580c94b17c254: Status 404 returned error can't find the container with id 6e9d4a81200425c728fc18cb88b5326442861df24ba9589ed33580c94b17c254 Jan 30 17:16:31 crc kubenswrapper[4875]: I0130 17:16:31.387568 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5c742a1d-1edc-4c32-bb30-87ab682c735f","Type":"ContainerStarted","Data":"a49b4f1596ddf10c5eae7d3d6a919e8738a416c4b842e4bce20a36c2e9d8d914"} Jan 30 17:16:31 crc kubenswrapper[4875]: I0130 17:16:31.387641 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5c742a1d-1edc-4c32-bb30-87ab682c735f","Type":"ContainerStarted","Data":"1fe32171492a3820cef455ef7bd58a02cc12843a0a6f2cbdc9172bdad8a2aa70"} Jan 30 17:16:31 crc kubenswrapper[4875]: I0130 17:16:31.387655 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" 
event={"ID":"5c742a1d-1edc-4c32-bb30-87ab682c735f","Type":"ContainerStarted","Data":"98e4d4cb84e739ada7c25e5793f00314dd30dcdc17cf22557aeaa63444bda65d"} Jan 30 17:16:31 crc kubenswrapper[4875]: I0130 17:16:31.390782 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fca688e9-e9fb-417b-9cfe-a56b5e098a3a","Type":"ContainerStarted","Data":"6e9d4a81200425c728fc18cb88b5326442861df24ba9589ed33580c94b17c254"} Jan 30 17:16:31 crc kubenswrapper[4875]: I0130 17:16:31.392550 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"9cc68f17-43a8-4027-9a59-481aeb6771d5","Type":"ContainerStarted","Data":"c2f4e289699c9c0bf7f3f211a8bdf361ee0e970600fdb86d969fa340d51d7e11"} Jan 30 17:16:31 crc kubenswrapper[4875]: I0130 17:16:31.392573 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"9cc68f17-43a8-4027-9a59-481aeb6771d5","Type":"ContainerStarted","Data":"9563e4c166bb38cf30b1ecdecf2ee857b1a4590cce9053aec3c90aa391e21c49"} Jan 30 17:16:31 crc kubenswrapper[4875]: I0130 17:16:31.410348 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.410322 podStartE2EDuration="2.410322s" podCreationTimestamp="2026-01-30 17:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:31.404004421 +0000 UTC m=+1201.951367804" watchObservedRunningTime="2026-01-30 17:16:31.410322 +0000 UTC m=+1201.957685403" Jan 30 17:16:31 crc kubenswrapper[4875]: I0130 17:16:31.427076 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.427056468 podStartE2EDuration="1.427056468s" podCreationTimestamp="2026-01-30 17:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:31.418684319 +0000 UTC m=+1201.966047742" watchObservedRunningTime="2026-01-30 17:16:31.427056468 +0000 UTC m=+1201.974419851" Jan 30 17:16:32 crc kubenswrapper[4875]: I0130 17:16:32.161659 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c" path="/var/lib/kubelet/pods/9cabb3a5-b966-4c4e-a1ee-e93ba1efb33c/volumes" Jan 30 17:16:32 crc kubenswrapper[4875]: I0130 17:16:32.164296 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a811b4cd-61f3-4833-9be7-bf20dd90986e" path="/var/lib/kubelet/pods/a811b4cd-61f3-4833-9be7-bf20dd90986e/volumes" Jan 30 17:16:32 crc kubenswrapper[4875]: I0130 17:16:32.410278 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fca688e9-e9fb-417b-9cfe-a56b5e098a3a","Type":"ContainerStarted","Data":"5a50e6a06d272ea218fd0f9f392d93d34a993d89270d2421c68bbd4bd09dce59"} Jan 30 17:16:32 crc kubenswrapper[4875]: I0130 17:16:32.410338 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fca688e9-e9fb-417b-9cfe-a56b5e098a3a","Type":"ContainerStarted","Data":"0580e823465999a0142046e50d8b646ece086dea2742a0ba33d6b64ab8d7e5bf"} Jan 30 17:16:32 crc kubenswrapper[4875]: I0130 17:16:32.435580 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" 
podStartSLOduration=2.435555268 podStartE2EDuration="2.435555268s" podCreationTimestamp="2026-01-30 17:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:32.426027888 +0000 UTC m=+1202.973391301" watchObservedRunningTime="2026-01-30 17:16:32.435555268 +0000 UTC m=+1202.982918691" Jan 30 17:16:35 crc kubenswrapper[4875]: I0130 17:16:35.092188 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:35 crc kubenswrapper[4875]: I0130 17:16:35.092692 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:35 crc kubenswrapper[4875]: I0130 17:16:35.747308 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:40 crc kubenswrapper[4875]: I0130 17:16:40.091763 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:40 crc kubenswrapper[4875]: I0130 17:16:40.092334 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:40 crc kubenswrapper[4875]: I0130 17:16:40.747682 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:40 crc kubenswrapper[4875]: I0130 17:16:40.773356 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:40 crc kubenswrapper[4875]: I0130 17:16:40.811325 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:40 crc kubenswrapper[4875]: I0130 17:16:40.811451 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:41 crc kubenswrapper[4875]: I0130 17:16:41.173860 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5c742a1d-1edc-4c32-bb30-87ab682c735f" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.140:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:16:41 crc kubenswrapper[4875]: I0130 17:16:41.173889 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5c742a1d-1edc-4c32-bb30-87ab682c735f" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.140:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:16:41 crc kubenswrapper[4875]: I0130 17:16:41.524981 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:16:41 crc kubenswrapper[4875]: I0130 17:16:41.892807 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="fca688e9-e9fb-417b-9cfe-a56b5e098a3a" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.142:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:16:41 crc kubenswrapper[4875]: I0130 17:16:41.892794 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" 
podUID="fca688e9-e9fb-417b-9cfe-a56b5e098a3a" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.142:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:16:50 crc kubenswrapper[4875]: I0130 17:16:50.094416 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:50 crc kubenswrapper[4875]: I0130 17:16:50.096007 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:50 crc kubenswrapper[4875]: I0130 17:16:50.097208 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:50 crc kubenswrapper[4875]: I0130 17:16:50.097930 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:16:50 crc kubenswrapper[4875]: I0130 17:16:50.815578 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:50 crc kubenswrapper[4875]: I0130 17:16:50.816723 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:50 crc kubenswrapper[4875]: I0130 17:16:50.818246 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:50 crc kubenswrapper[4875]: I0130 17:16:50.820949 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:51 crc kubenswrapper[4875]: I0130 17:16:51.577277 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:51 crc kubenswrapper[4875]: I0130 17:16:51.581524 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:16:53 crc kubenswrapper[4875]: I0130 17:16:53.295962 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq"] Jan 30 17:16:53 crc kubenswrapper[4875]: I0130 17:16:53.297465 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" Jan 30 17:16:53 crc kubenswrapper[4875]: I0130 17:16:53.303933 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 30 17:16:53 crc kubenswrapper[4875]: I0130 17:16:53.304084 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 30 17:16:53 crc kubenswrapper[4875]: I0130 17:16:53.305752 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq"] Jan 30 17:16:53 crc kubenswrapper[4875]: I0130 17:16:53.362387 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c47kk\" (UniqueName: \"kubernetes.io/projected/44a7e857-e4b7-491a-b003-ca6a71e3bc08-kube-api-access-c47kk\") pod \"nova-kuttl-cell1-cell-delete-lg6bq\" (UID: \"44a7e857-e4b7-491a-b003-ca6a71e3bc08\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" Jan 30 17:16:53 crc kubenswrapper[4875]: I0130 17:16:53.362819 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44a7e857-e4b7-491a-b003-ca6a71e3bc08-scripts\") pod \"nova-kuttl-cell1-cell-delete-lg6bq\" (UID: \"44a7e857-e4b7-491a-b003-ca6a71e3bc08\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" Jan 30 17:16:53 crc kubenswrapper[4875]: I0130 17:16:53.362845 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44a7e857-e4b7-491a-b003-ca6a71e3bc08-config-data\") pod \"nova-kuttl-cell1-cell-delete-lg6bq\" (UID: \"44a7e857-e4b7-491a-b003-ca6a71e3bc08\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" Jan 30 17:16:53 crc kubenswrapper[4875]: I0130 17:16:53.464108 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44a7e857-e4b7-491a-b003-ca6a71e3bc08-scripts\") pod \"nova-kuttl-cell1-cell-delete-lg6bq\" (UID: \"44a7e857-e4b7-491a-b003-ca6a71e3bc08\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" Jan 30 17:16:53 crc kubenswrapper[4875]: I0130 17:16:53.464186 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44a7e857-e4b7-491a-b003-ca6a71e3bc08-config-data\") pod \"nova-kuttl-cell1-cell-delete-lg6bq\" (UID: \"44a7e857-e4b7-491a-b003-ca6a71e3bc08\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" Jan 30 17:16:53 crc kubenswrapper[4875]: I0130 17:16:53.464227 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c47kk\" (UniqueName: \"kubernetes.io/projected/44a7e857-e4b7-491a-b003-ca6a71e3bc08-kube-api-access-c47kk\") pod \"nova-kuttl-cell1-cell-delete-lg6bq\" (UID: \"44a7e857-e4b7-491a-b003-ca6a71e3bc08\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" Jan 30 17:16:53 crc kubenswrapper[4875]: I0130 17:16:53.471541 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44a7e857-e4b7-491a-b003-ca6a71e3bc08-scripts\") pod \"nova-kuttl-cell1-cell-delete-lg6bq\" (UID: \"44a7e857-e4b7-491a-b003-ca6a71e3bc08\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" Jan 30 17:16:53 crc 
kubenswrapper[4875]: I0130 17:16:53.471803 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44a7e857-e4b7-491a-b003-ca6a71e3bc08-config-data\") pod \"nova-kuttl-cell1-cell-delete-lg6bq\" (UID: \"44a7e857-e4b7-491a-b003-ca6a71e3bc08\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" Jan 30 17:16:53 crc kubenswrapper[4875]: I0130 17:16:53.480473 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c47kk\" (UniqueName: \"kubernetes.io/projected/44a7e857-e4b7-491a-b003-ca6a71e3bc08-kube-api-access-c47kk\") pod \"nova-kuttl-cell1-cell-delete-lg6bq\" (UID: \"44a7e857-e4b7-491a-b003-ca6a71e3bc08\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" Jan 30 17:16:53 crc kubenswrapper[4875]: I0130 17:16:53.616352 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" Jan 30 17:16:54 crc kubenswrapper[4875]: W0130 17:16:54.029609 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44a7e857_e4b7_491a_b003_ca6a71e3bc08.slice/crio-f774cd7418da7a1385e70d8ab7febe832549156cb5105f6546f0fe13951e9840 WatchSource:0}: Error finding container f774cd7418da7a1385e70d8ab7febe832549156cb5105f6546f0fe13951e9840: Status 404 returned error can't find the container with id f774cd7418da7a1385e70d8ab7febe832549156cb5105f6546f0fe13951e9840 Jan 30 17:16:54 crc kubenswrapper[4875]: I0130 17:16:54.036211 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq"] Jan 30 17:16:54 crc kubenswrapper[4875]: I0130 17:16:54.597568 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" event={"ID":"44a7e857-e4b7-491a-b003-ca6a71e3bc08","Type":"ContainerStarted","Data":"5fd1203d63452140b67d813d8ae19a230a52832b27f87a8f7c01d200ca8bfee3"} Jan 30 17:16:54 crc kubenswrapper[4875]: I0130 17:16:54.597647 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" event={"ID":"44a7e857-e4b7-491a-b003-ca6a71e3bc08","Type":"ContainerStarted","Data":"f774cd7418da7a1385e70d8ab7febe832549156cb5105f6546f0fe13951e9840"} Jan 30 17:16:54 crc kubenswrapper[4875]: I0130 17:16:54.616767 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" podStartSLOduration=1.616750205 podStartE2EDuration="1.616750205s" podCreationTimestamp="2026-01-30 17:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:54.613696 +0000 UTC m=+1225.161059383" watchObservedRunningTime="2026-01-30 17:16:54.616750205 +0000 UTC m=+1225.164113588" Jan 30 17:16:59 crc kubenswrapper[4875]: I0130 17:16:59.639736 4875 generic.go:334] "Generic (PLEG): container finished" podID="44a7e857-e4b7-491a-b003-ca6a71e3bc08" containerID="5fd1203d63452140b67d813d8ae19a230a52832b27f87a8f7c01d200ca8bfee3" exitCode=0 Jan 30 17:16:59 crc kubenswrapper[4875]: I0130 17:16:59.639840 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" event={"ID":"44a7e857-e4b7-491a-b003-ca6a71e3bc08","Type":"ContainerDied","Data":"5fd1203d63452140b67d813d8ae19a230a52832b27f87a8f7c01d200ca8bfee3"} Jan 30 17:17:00 crc 
kubenswrapper[4875]: I0130 17:17:00.963093 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" Jan 30 17:17:01 crc kubenswrapper[4875]: I0130 17:17:01.088915 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44a7e857-e4b7-491a-b003-ca6a71e3bc08-config-data\") pod \"44a7e857-e4b7-491a-b003-ca6a71e3bc08\" (UID: \"44a7e857-e4b7-491a-b003-ca6a71e3bc08\") " Jan 30 17:17:01 crc kubenswrapper[4875]: I0130 17:17:01.088993 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44a7e857-e4b7-491a-b003-ca6a71e3bc08-scripts\") pod \"44a7e857-e4b7-491a-b003-ca6a71e3bc08\" (UID: \"44a7e857-e4b7-491a-b003-ca6a71e3bc08\") " Jan 30 17:17:01 crc kubenswrapper[4875]: I0130 17:17:01.089071 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c47kk\" (UniqueName: \"kubernetes.io/projected/44a7e857-e4b7-491a-b003-ca6a71e3bc08-kube-api-access-c47kk\") pod \"44a7e857-e4b7-491a-b003-ca6a71e3bc08\" (UID: \"44a7e857-e4b7-491a-b003-ca6a71e3bc08\") " Jan 30 17:17:01 crc kubenswrapper[4875]: I0130 17:17:01.095639 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44a7e857-e4b7-491a-b003-ca6a71e3bc08-scripts" (OuterVolumeSpecName: "scripts") pod "44a7e857-e4b7-491a-b003-ca6a71e3bc08" (UID: "44a7e857-e4b7-491a-b003-ca6a71e3bc08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:01 crc kubenswrapper[4875]: I0130 17:17:01.096030 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a7e857-e4b7-491a-b003-ca6a71e3bc08-kube-api-access-c47kk" (OuterVolumeSpecName: "kube-api-access-c47kk") pod "44a7e857-e4b7-491a-b003-ca6a71e3bc08" (UID: "44a7e857-e4b7-491a-b003-ca6a71e3bc08"). InnerVolumeSpecName "kube-api-access-c47kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:01 crc kubenswrapper[4875]: I0130 17:17:01.115827 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44a7e857-e4b7-491a-b003-ca6a71e3bc08-config-data" (OuterVolumeSpecName: "config-data") pod "44a7e857-e4b7-491a-b003-ca6a71e3bc08" (UID: "44a7e857-e4b7-491a-b003-ca6a71e3bc08"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:01 crc kubenswrapper[4875]: I0130 17:17:01.191730 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44a7e857-e4b7-491a-b003-ca6a71e3bc08-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:01 crc kubenswrapper[4875]: I0130 17:17:01.191772 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44a7e857-e4b7-491a-b003-ca6a71e3bc08-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:01 crc kubenswrapper[4875]: I0130 17:17:01.191788 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c47kk\" (UniqueName: \"kubernetes.io/projected/44a7e857-e4b7-491a-b003-ca6a71e3bc08-kube-api-access-c47kk\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:01 crc kubenswrapper[4875]: I0130 17:17:01.658302 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" event={"ID":"44a7e857-e4b7-491a-b003-ca6a71e3bc08","Type":"ContainerDied","Data":"f774cd7418da7a1385e70d8ab7febe832549156cb5105f6546f0fe13951e9840"} Jan 30 17:17:01 crc kubenswrapper[4875]: I0130 17:17:01.658378 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f774cd7418da7a1385e70d8ab7febe832549156cb5105f6546f0fe13951e9840" Jan 30 17:17:01 crc kubenswrapper[4875]: I0130 17:17:01.658486 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq" Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.003693 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc"] Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.012600 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.012821 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="43fc7a38-c949-4c28-8449-f23a5224cf13" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://005280cd9973d176e553373f86085b3bbe6de5fe194b3b8b97e602f815daf506" gracePeriod=30 Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.022449 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-w4dqc"] Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.030527 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell1cf36-account-delete-hxgmj"] Jan 30 17:17:02 crc kubenswrapper[4875]: E0130 17:17:02.033054 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a7e857-e4b7-491a-b003-ca6a71e3bc08" containerName="nova-manage" Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.033082 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="44a7e857-e4b7-491a-b003-ca6a71e3bc08" containerName="nova-manage" Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.033316 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="44a7e857-e4b7-491a-b003-ca6a71e3bc08" containerName="nova-manage" Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.034179 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell1cf36-account-delete-hxgmj" Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.036615 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.036798 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="4bbb2c92-8124-49f3-b278-c77b4b0d8a52" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://7a7132bca0906b89335d0aa2d3779663174b8deed679ac2fa5adaf9404077c1d" gracePeriod=30 Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.052378 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1cf36-account-delete-hxgmj"] Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.109166 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5h6x\" (UniqueName: \"kubernetes.io/projected/82cd9ebf-f9ce-4e60-8cce-348dfada6f12-kube-api-access-t5h6x\") pod \"novacell1cf36-account-delete-hxgmj\" (UID: \"82cd9ebf-f9ce-4e60-8cce-348dfada6f12\") " pod="nova-kuttl-default/novacell1cf36-account-delete-hxgmj" Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.109594 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82cd9ebf-f9ce-4e60-8cce-348dfada6f12-operator-scripts\") pod \"novacell1cf36-account-delete-hxgmj\" (UID: \"82cd9ebf-f9ce-4e60-8cce-348dfada6f12\") " pod="nova-kuttl-default/novacell1cf36-account-delete-hxgmj" Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.146074 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e90ae8f-59a8-45bd-8d17-8f09cec682c3" path="/var/lib/kubelet/pods/2e90ae8f-59a8-45bd-8d17-8f09cec682c3/volumes" Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.211198 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82cd9ebf-f9ce-4e60-8cce-348dfada6f12-operator-scripts\") pod \"novacell1cf36-account-delete-hxgmj\" (UID: \"82cd9ebf-f9ce-4e60-8cce-348dfada6f12\") " pod="nova-kuttl-default/novacell1cf36-account-delete-hxgmj" Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.212664 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82cd9ebf-f9ce-4e60-8cce-348dfada6f12-operator-scripts\") pod \"novacell1cf36-account-delete-hxgmj\" (UID: \"82cd9ebf-f9ce-4e60-8cce-348dfada6f12\") " pod="nova-kuttl-default/novacell1cf36-account-delete-hxgmj" Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.212973 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5h6x\" (UniqueName: \"kubernetes.io/projected/82cd9ebf-f9ce-4e60-8cce-348dfada6f12-kube-api-access-t5h6x\") pod \"novacell1cf36-account-delete-hxgmj\" (UID: \"82cd9ebf-f9ce-4e60-8cce-348dfada6f12\") " pod="nova-kuttl-default/novacell1cf36-account-delete-hxgmj" Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.223520 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.223942 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" 
podUID="fca688e9-e9fb-417b-9cfe-a56b5e098a3a" containerName="nova-kuttl-api-log" containerID="cri-o://0580e823465999a0142046e50d8b646ece086dea2742a0ba33d6b64ab8d7e5bf" gracePeriod=30 Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.224195 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="fca688e9-e9fb-417b-9cfe-a56b5e098a3a" containerName="nova-kuttl-api-api" containerID="cri-o://5a50e6a06d272ea218fd0f9f392d93d34a993d89270d2421c68bbd4bd09dce59" gracePeriod=30 Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.238865 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5h6x\" (UniqueName: \"kubernetes.io/projected/82cd9ebf-f9ce-4e60-8cce-348dfada6f12-kube-api-access-t5h6x\") pod \"novacell1cf36-account-delete-hxgmj\" (UID: \"82cd9ebf-f9ce-4e60-8cce-348dfada6f12\") " pod="nova-kuttl-default/novacell1cf36-account-delete-hxgmj" Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.307638 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.307899 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5c742a1d-1edc-4c32-bb30-87ab682c735f" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://a49b4f1596ddf10c5eae7d3d6a919e8738a416c4b842e4bce20a36c2e9d8d914" gracePeriod=30 Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.307857 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5c742a1d-1edc-4c32-bb30-87ab682c735f" containerName="nova-kuttl-metadata-log" containerID="cri-o://1fe32171492a3820cef455ef7bd58a02cc12843a0a6f2cbdc9172bdad8a2aa70" gracePeriod=30 Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.354016 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell1cf36-account-delete-hxgmj" Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.667288 4875 generic.go:334] "Generic (PLEG): container finished" podID="5c742a1d-1edc-4c32-bb30-87ab682c735f" containerID="1fe32171492a3820cef455ef7bd58a02cc12843a0a6f2cbdc9172bdad8a2aa70" exitCode=143 Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.667361 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5c742a1d-1edc-4c32-bb30-87ab682c735f","Type":"ContainerDied","Data":"1fe32171492a3820cef455ef7bd58a02cc12843a0a6f2cbdc9172bdad8a2aa70"} Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.669518 4875 generic.go:334] "Generic (PLEG): container finished" podID="4bbb2c92-8124-49f3-b278-c77b4b0d8a52" containerID="7a7132bca0906b89335d0aa2d3779663174b8deed679ac2fa5adaf9404077c1d" exitCode=0 Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.669624 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"4bbb2c92-8124-49f3-b278-c77b4b0d8a52","Type":"ContainerDied","Data":"7a7132bca0906b89335d0aa2d3779663174b8deed679ac2fa5adaf9404077c1d"} Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.671512 4875 generic.go:334] "Generic (PLEG): container finished" podID="fca688e9-e9fb-417b-9cfe-a56b5e098a3a" containerID="0580e823465999a0142046e50d8b646ece086dea2742a0ba33d6b64ab8d7e5bf" exitCode=143 Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.671544 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fca688e9-e9fb-417b-9cfe-a56b5e098a3a","Type":"ContainerDied","Data":"0580e823465999a0142046e50d8b646ece086dea2742a0ba33d6b64ab8d7e5bf"} Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.790640 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1cf36-account-delete-hxgmj"] Jan 30 17:17:02 crc kubenswrapper[4875]: W0130 17:17:02.791857 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82cd9ebf_f9ce_4e60_8cce_348dfada6f12.slice/crio-ebc6cc7560102739b39cf71d47cbaa9b2a159eed679ed6d57acc7a70842c1063 WatchSource:0}: Error finding container ebc6cc7560102739b39cf71d47cbaa9b2a159eed679ed6d57acc7a70842c1063: Status 404 returned error can't find the container with id ebc6cc7560102739b39cf71d47cbaa9b2a159eed679ed6d57acc7a70842c1063 Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.872607 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.927657 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghngw\" (UniqueName: \"kubernetes.io/projected/4bbb2c92-8124-49f3-b278-c77b4b0d8a52-kube-api-access-ghngw\") pod \"4bbb2c92-8124-49f3-b278-c77b4b0d8a52\" (UID: \"4bbb2c92-8124-49f3-b278-c77b4b0d8a52\") " Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.928190 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbb2c92-8124-49f3-b278-c77b4b0d8a52-config-data\") pod \"4bbb2c92-8124-49f3-b278-c77b4b0d8a52\" (UID: \"4bbb2c92-8124-49f3-b278-c77b4b0d8a52\") " Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.934439 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bbb2c92-8124-49f3-b278-c77b4b0d8a52-kube-api-access-ghngw" (OuterVolumeSpecName: "kube-api-access-ghngw") pod "4bbb2c92-8124-49f3-b278-c77b4b0d8a52" (UID: "4bbb2c92-8124-49f3-b278-c77b4b0d8a52"). InnerVolumeSpecName "kube-api-access-ghngw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:02 crc kubenswrapper[4875]: I0130 17:17:02.950440 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bbb2c92-8124-49f3-b278-c77b4b0d8a52-config-data" (OuterVolumeSpecName: "config-data") pod "4bbb2c92-8124-49f3-b278-c77b4b0d8a52" (UID: "4bbb2c92-8124-49f3-b278-c77b4b0d8a52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:03 crc kubenswrapper[4875]: I0130 17:17:03.031283 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghngw\" (UniqueName: \"kubernetes.io/projected/4bbb2c92-8124-49f3-b278-c77b4b0d8a52-kube-api-access-ghngw\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:03 crc kubenswrapper[4875]: I0130 17:17:03.031346 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bbb2c92-8124-49f3-b278-c77b4b0d8a52-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:03 crc kubenswrapper[4875]: I0130 17:17:03.686119 4875 generic.go:334] "Generic (PLEG): container finished" podID="82cd9ebf-f9ce-4e60-8cce-348dfada6f12" containerID="007475f3dbfa71584438843858509f0e6fbd8d04a4fecbca9d37b3b82b4eca40" exitCode=0 Jan 30 17:17:03 crc kubenswrapper[4875]: I0130 17:17:03.686281 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1cf36-account-delete-hxgmj" event={"ID":"82cd9ebf-f9ce-4e60-8cce-348dfada6f12","Type":"ContainerDied","Data":"007475f3dbfa71584438843858509f0e6fbd8d04a4fecbca9d37b3b82b4eca40"} Jan 30 17:17:03 crc kubenswrapper[4875]: I0130 17:17:03.686339 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1cf36-account-delete-hxgmj" event={"ID":"82cd9ebf-f9ce-4e60-8cce-348dfada6f12","Type":"ContainerStarted","Data":"ebc6cc7560102739b39cf71d47cbaa9b2a159eed679ed6d57acc7a70842c1063"} Jan 30 17:17:03 crc kubenswrapper[4875]: I0130 17:17:03.689874 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"4bbb2c92-8124-49f3-b278-c77b4b0d8a52","Type":"ContainerDied","Data":"b64fd953101cd86d44712244f67dc55d1adf41e4e4345c8c67d927e44aa1b819"} Jan 30 17:17:03 crc kubenswrapper[4875]: I0130 17:17:03.689937 4875 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:17:03 crc kubenswrapper[4875]: I0130 17:17:03.689977 4875 scope.go:117] "RemoveContainer" containerID="7a7132bca0906b89335d0aa2d3779663174b8deed679ac2fa5adaf9404077c1d" Jan 30 17:17:03 crc kubenswrapper[4875]: I0130 17:17:03.745330 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 30 17:17:03 crc kubenswrapper[4875]: I0130 17:17:03.756191 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 30 17:17:04 crc kubenswrapper[4875]: I0130 17:17:04.154667 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bbb2c92-8124-49f3-b278-c77b4b0d8a52" path="/var/lib/kubelet/pods/4bbb2c92-8124-49f3-b278-c77b4b0d8a52/volumes" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.183240 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1cf36-account-delete-hxgmj" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.380058 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5h6x\" (UniqueName: \"kubernetes.io/projected/82cd9ebf-f9ce-4e60-8cce-348dfada6f12-kube-api-access-t5h6x\") pod \"82cd9ebf-f9ce-4e60-8cce-348dfada6f12\" (UID: \"82cd9ebf-f9ce-4e60-8cce-348dfada6f12\") " Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.380199 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82cd9ebf-f9ce-4e60-8cce-348dfada6f12-operator-scripts\") pod \"82cd9ebf-f9ce-4e60-8cce-348dfada6f12\" (UID: \"82cd9ebf-f9ce-4e60-8cce-348dfada6f12\") " Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.381089 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82cd9ebf-f9ce-4e60-8cce-348dfada6f12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "82cd9ebf-f9ce-4e60-8cce-348dfada6f12" (UID: "82cd9ebf-f9ce-4e60-8cce-348dfada6f12"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.388304 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82cd9ebf-f9ce-4e60-8cce-348dfada6f12-kube-api-access-t5h6x" (OuterVolumeSpecName: "kube-api-access-t5h6x") pod "82cd9ebf-f9ce-4e60-8cce-348dfada6f12" (UID: "82cd9ebf-f9ce-4e60-8cce-348dfada6f12"). InnerVolumeSpecName "kube-api-access-t5h6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.405721 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.405949 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="9cc68f17-43a8-4027-9a59-481aeb6771d5" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://c2f4e289699c9c0bf7f3f211a8bdf361ee0e970600fdb86d969fa340d51d7e11" gracePeriod=30 Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.443447 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5c742a1d-1edc-4c32-bb30-87ab682c735f" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.140:8775/\": read tcp 10.217.0.2:52206->10.217.0.140:8775: read: connection reset by peer" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.443405 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5c742a1d-1edc-4c32-bb30-87ab682c735f" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.140:8775/\": read tcp 10.217.0.2:52198->10.217.0.140:8775: read: connection reset by peer" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.484654 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82cd9ebf-f9ce-4e60-8cce-348dfada6f12-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.484690 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5h6x\" (UniqueName: \"kubernetes.io/projected/82cd9ebf-f9ce-4e60-8cce-348dfada6f12-kube-api-access-t5h6x\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.601638 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.687501 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qzl7\" (UniqueName: \"kubernetes.io/projected/43fc7a38-c949-4c28-8449-f23a5224cf13-kube-api-access-5qzl7\") pod \"43fc7a38-c949-4c28-8449-f23a5224cf13\" (UID: \"43fc7a38-c949-4c28-8449-f23a5224cf13\") " Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.687785 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43fc7a38-c949-4c28-8449-f23a5224cf13-config-data\") pod \"43fc7a38-c949-4c28-8449-f23a5224cf13\" (UID: \"43fc7a38-c949-4c28-8449-f23a5224cf13\") " Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.692809 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43fc7a38-c949-4c28-8449-f23a5224cf13-kube-api-access-5qzl7" (OuterVolumeSpecName: "kube-api-access-5qzl7") pod "43fc7a38-c949-4c28-8449-f23a5224cf13" (UID: "43fc7a38-c949-4c28-8449-f23a5224cf13"). InnerVolumeSpecName "kube-api-access-5qzl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.708571 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.711303 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43fc7a38-c949-4c28-8449-f23a5224cf13-config-data" (OuterVolumeSpecName: "config-data") pod "43fc7a38-c949-4c28-8449-f23a5224cf13" (UID: "43fc7a38-c949-4c28-8449-f23a5224cf13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.738707 4875 generic.go:334] "Generic (PLEG): container finished" podID="fca688e9-e9fb-417b-9cfe-a56b5e098a3a" containerID="5a50e6a06d272ea218fd0f9f392d93d34a993d89270d2421c68bbd4bd09dce59" exitCode=0 Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.738826 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fca688e9-e9fb-417b-9cfe-a56b5e098a3a","Type":"ContainerDied","Data":"5a50e6a06d272ea218fd0f9f392d93d34a993d89270d2421c68bbd4bd09dce59"} Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.738846 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.738871 4875 scope.go:117] "RemoveContainer" containerID="5a50e6a06d272ea218fd0f9f392d93d34a993d89270d2421c68bbd4bd09dce59" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.738858 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fca688e9-e9fb-417b-9cfe-a56b5e098a3a","Type":"ContainerDied","Data":"6e9d4a81200425c728fc18cb88b5326442861df24ba9589ed33580c94b17c254"} Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.746477 4875 generic.go:334] "Generic (PLEG): container finished" podID="5c742a1d-1edc-4c32-bb30-87ab682c735f" containerID="a49b4f1596ddf10c5eae7d3d6a919e8738a416c4b842e4bce20a36c2e9d8d914" exitCode=0 Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.746540 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5c742a1d-1edc-4c32-bb30-87ab682c735f","Type":"ContainerDied","Data":"a49b4f1596ddf10c5eae7d3d6a919e8738a416c4b842e4bce20a36c2e9d8d914"} Jan 30 17:17:05 crc kubenswrapper[4875]: E0130 17:17:05.754105 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2f4e289699c9c0bf7f3f211a8bdf361ee0e970600fdb86d969fa340d51d7e11" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.760331 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1cf36-account-delete-hxgmj" event={"ID":"82cd9ebf-f9ce-4e60-8cce-348dfada6f12","Type":"ContainerDied","Data":"ebc6cc7560102739b39cf71d47cbaa9b2a159eed679ed6d57acc7a70842c1063"} Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.760359 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebc6cc7560102739b39cf71d47cbaa9b2a159eed679ed6d57acc7a70842c1063" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.760417 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell1cf36-account-delete-hxgmj" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.764890 4875 generic.go:334] "Generic (PLEG): container finished" podID="43fc7a38-c949-4c28-8449-f23a5224cf13" containerID="005280cd9973d176e553373f86085b3bbe6de5fe194b3b8b97e602f815daf506" exitCode=0 Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.764934 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"43fc7a38-c949-4c28-8449-f23a5224cf13","Type":"ContainerDied","Data":"005280cd9973d176e553373f86085b3bbe6de5fe194b3b8b97e602f815daf506"} Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.764963 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"43fc7a38-c949-4c28-8449-f23a5224cf13","Type":"ContainerDied","Data":"c138b0440039255318a724fb6f2cd14df2d68449586628a89c9a4d78b63cefc8"} Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.765021 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:17:05 crc kubenswrapper[4875]: E0130 17:17:05.773299 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2f4e289699c9c0bf7f3f211a8bdf361ee0e970600fdb86d969fa340d51d7e11" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:17:05 crc kubenswrapper[4875]: E0130 17:17:05.784901 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2f4e289699c9c0bf7f3f211a8bdf361ee0e970600fdb86d969fa340d51d7e11" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:17:05 crc kubenswrapper[4875]: E0130 17:17:05.784973 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="9cc68f17-43a8-4027-9a59-481aeb6771d5" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.788681 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-logs\") pod \"fca688e9-e9fb-417b-9cfe-a56b5e098a3a\" (UID: \"fca688e9-e9fb-417b-9cfe-a56b5e098a3a\") " Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.788801 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfrrw\" (UniqueName: \"kubernetes.io/projected/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-kube-api-access-sfrrw\") pod \"fca688e9-e9fb-417b-9cfe-a56b5e098a3a\" (UID: \"fca688e9-e9fb-417b-9cfe-a56b5e098a3a\") " Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.788833 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-config-data\") pod \"fca688e9-e9fb-417b-9cfe-a56b5e098a3a\" (UID: \"fca688e9-e9fb-417b-9cfe-a56b5e098a3a\") " Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.789204 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/43fc7a38-c949-4c28-8449-f23a5224cf13-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.789228 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qzl7\" (UniqueName: \"kubernetes.io/projected/43fc7a38-c949-4c28-8449-f23a5224cf13-kube-api-access-5qzl7\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.790070 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-logs" (OuterVolumeSpecName: "logs") pod "fca688e9-e9fb-417b-9cfe-a56b5e098a3a" (UID: "fca688e9-e9fb-417b-9cfe-a56b5e098a3a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.791869 4875 scope.go:117] "RemoveContainer" containerID="0580e823465999a0142046e50d8b646ece086dea2742a0ba33d6b64ab8d7e5bf" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.806812 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-kube-api-access-sfrrw" (OuterVolumeSpecName: "kube-api-access-sfrrw") pod "fca688e9-e9fb-417b-9cfe-a56b5e098a3a" (UID: "fca688e9-e9fb-417b-9cfe-a56b5e098a3a"). InnerVolumeSpecName "kube-api-access-sfrrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.818703 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.831050 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-config-data" (OuterVolumeSpecName: "config-data") pod "fca688e9-e9fb-417b-9cfe-a56b5e098a3a" (UID: "fca688e9-e9fb-417b-9cfe-a56b5e098a3a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.833111 4875 scope.go:117] "RemoveContainer" containerID="5a50e6a06d272ea218fd0f9f392d93d34a993d89270d2421c68bbd4bd09dce59" Jan 30 17:17:05 crc kubenswrapper[4875]: E0130 17:17:05.833453 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a50e6a06d272ea218fd0f9f392d93d34a993d89270d2421c68bbd4bd09dce59\": container with ID starting with 5a50e6a06d272ea218fd0f9f392d93d34a993d89270d2421c68bbd4bd09dce59 not found: ID does not exist" containerID="5a50e6a06d272ea218fd0f9f392d93d34a993d89270d2421c68bbd4bd09dce59" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.833490 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a50e6a06d272ea218fd0f9f392d93d34a993d89270d2421c68bbd4bd09dce59"} err="failed to get container status \"5a50e6a06d272ea218fd0f9f392d93d34a993d89270d2421c68bbd4bd09dce59\": rpc error: code = NotFound desc = could not find container \"5a50e6a06d272ea218fd0f9f392d93d34a993d89270d2421c68bbd4bd09dce59\": container with ID starting with 5a50e6a06d272ea218fd0f9f392d93d34a993d89270d2421c68bbd4bd09dce59 not found: ID does not exist" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.833514 4875 scope.go:117] "RemoveContainer" containerID="0580e823465999a0142046e50d8b646ece086dea2742a0ba33d6b64ab8d7e5bf" Jan 30 17:17:05 crc kubenswrapper[4875]: E0130 17:17:05.833952 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0580e823465999a0142046e50d8b646ece086dea2742a0ba33d6b64ab8d7e5bf\": container with ID starting with 0580e823465999a0142046e50d8b646ece086dea2742a0ba33d6b64ab8d7e5bf not found: ID does not exist" containerID="0580e823465999a0142046e50d8b646ece086dea2742a0ba33d6b64ab8d7e5bf" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.834011 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0580e823465999a0142046e50d8b646ece086dea2742a0ba33d6b64ab8d7e5bf"} err="failed to get container status \"0580e823465999a0142046e50d8b646ece086dea2742a0ba33d6b64ab8d7e5bf\": rpc error: code = NotFound desc = could not find container \"0580e823465999a0142046e50d8b646ece086dea2742a0ba33d6b64ab8d7e5bf\": container with ID starting with 0580e823465999a0142046e50d8b646ece086dea2742a0ba33d6b64ab8d7e5bf not found: ID does not exist" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.834041 4875 scope.go:117] "RemoveContainer" containerID="005280cd9973d176e553373f86085b3bbe6de5fe194b3b8b97e602f815daf506" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.834099 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.867541 4875 scope.go:117] "RemoveContainer" containerID="005280cd9973d176e553373f86085b3bbe6de5fe194b3b8b97e602f815daf506" Jan 30 17:17:05 crc kubenswrapper[4875]: E0130 17:17:05.868234 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"005280cd9973d176e553373f86085b3bbe6de5fe194b3b8b97e602f815daf506\": container with ID starting with 005280cd9973d176e553373f86085b3bbe6de5fe194b3b8b97e602f815daf506 not found: ID does not exist" containerID="005280cd9973d176e553373f86085b3bbe6de5fe194b3b8b97e602f815daf506" Jan 30 17:17:05 crc 
kubenswrapper[4875]: I0130 17:17:05.868264 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"005280cd9973d176e553373f86085b3bbe6de5fe194b3b8b97e602f815daf506"} err="failed to get container status \"005280cd9973d176e553373f86085b3bbe6de5fe194b3b8b97e602f815daf506\": rpc error: code = NotFound desc = could not find container \"005280cd9973d176e553373f86085b3bbe6de5fe194b3b8b97e602f815daf506\": container with ID starting with 005280cd9973d176e553373f86085b3bbe6de5fe194b3b8b97e602f815daf506 not found: ID does not exist" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.886109 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.889792 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hgnv\" (UniqueName: \"kubernetes.io/projected/5c742a1d-1edc-4c32-bb30-87ab682c735f-kube-api-access-6hgnv\") pod \"5c742a1d-1edc-4c32-bb30-87ab682c735f\" (UID: \"5c742a1d-1edc-4c32-bb30-87ab682c735f\") " Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.889868 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c742a1d-1edc-4c32-bb30-87ab682c735f-logs\") pod \"5c742a1d-1edc-4c32-bb30-87ab682c735f\" (UID: \"5c742a1d-1edc-4c32-bb30-87ab682c735f\") " Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.889943 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c742a1d-1edc-4c32-bb30-87ab682c735f-config-data\") pod \"5c742a1d-1edc-4c32-bb30-87ab682c735f\" (UID: \"5c742a1d-1edc-4c32-bb30-87ab682c735f\") " Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.890345 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.890365 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfrrw\" (UniqueName: \"kubernetes.io/projected/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-kube-api-access-sfrrw\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.890376 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fca688e9-e9fb-417b-9cfe-a56b5e098a3a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.890665 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c742a1d-1edc-4c32-bb30-87ab682c735f-logs" (OuterVolumeSpecName: "logs") pod "5c742a1d-1edc-4c32-bb30-87ab682c735f" (UID: "5c742a1d-1edc-4c32-bb30-87ab682c735f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.892208 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c742a1d-1edc-4c32-bb30-87ab682c735f-kube-api-access-6hgnv" (OuterVolumeSpecName: "kube-api-access-6hgnv") pod "5c742a1d-1edc-4c32-bb30-87ab682c735f" (UID: "5c742a1d-1edc-4c32-bb30-87ab682c735f"). InnerVolumeSpecName "kube-api-access-6hgnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.931377 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c742a1d-1edc-4c32-bb30-87ab682c735f-config-data" (OuterVolumeSpecName: "config-data") pod "5c742a1d-1edc-4c32-bb30-87ab682c735f" (UID: "5c742a1d-1edc-4c32-bb30-87ab682c735f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.991693 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hgnv\" (UniqueName: \"kubernetes.io/projected/5c742a1d-1edc-4c32-bb30-87ab682c735f-kube-api-access-6hgnv\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.991730 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c742a1d-1edc-4c32-bb30-87ab682c735f-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:05 crc kubenswrapper[4875]: I0130 17:17:05.991744 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c742a1d-1edc-4c32-bb30-87ab682c735f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.101789 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.104785 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.110261 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:17:06 crc kubenswrapper[4875]: E0130 17:17:06.110808 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43fc7a38-c949-4c28-8449-f23a5224cf13" containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.110833 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="43fc7a38-c949-4c28-8449-f23a5224cf13" containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:17:06 crc kubenswrapper[4875]: E0130 17:17:06.110845 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbb2c92-8124-49f3-b278-c77b4b0d8a52" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.110853 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbb2c92-8124-49f3-b278-c77b4b0d8a52" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 30 17:17:06 crc kubenswrapper[4875]: E0130 17:17:06.110868 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca688e9-e9fb-417b-9cfe-a56b5e098a3a" containerName="nova-kuttl-api-log" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.110874 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca688e9-e9fb-417b-9cfe-a56b5e098a3a" containerName="nova-kuttl-api-log" Jan 30 17:17:06 crc kubenswrapper[4875]: E0130 17:17:06.110881 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca688e9-e9fb-417b-9cfe-a56b5e098a3a" containerName="nova-kuttl-api-api" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.110887 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca688e9-e9fb-417b-9cfe-a56b5e098a3a" containerName="nova-kuttl-api-api" Jan 30 17:17:06 crc kubenswrapper[4875]: E0130 17:17:06.110895 4875 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="82cd9ebf-f9ce-4e60-8cce-348dfada6f12" containerName="mariadb-account-delete" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.110902 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="82cd9ebf-f9ce-4e60-8cce-348dfada6f12" containerName="mariadb-account-delete" Jan 30 17:17:06 crc kubenswrapper[4875]: E0130 17:17:06.110918 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c742a1d-1edc-4c32-bb30-87ab682c735f" containerName="nova-kuttl-metadata-log" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.110923 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c742a1d-1edc-4c32-bb30-87ab682c735f" containerName="nova-kuttl-metadata-log" Jan 30 17:17:06 crc kubenswrapper[4875]: E0130 17:17:06.110933 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c742a1d-1edc-4c32-bb30-87ab682c735f" containerName="nova-kuttl-metadata-metadata" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.110939 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c742a1d-1edc-4c32-bb30-87ab682c735f" containerName="nova-kuttl-metadata-metadata" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.111086 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="43fc7a38-c949-4c28-8449-f23a5224cf13" containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.111095 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c742a1d-1edc-4c32-bb30-87ab682c735f" containerName="nova-kuttl-metadata-log" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.111107 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bbb2c92-8124-49f3-b278-c77b4b0d8a52" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.111116 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c742a1d-1edc-4c32-bb30-87ab682c735f" containerName="nova-kuttl-metadata-metadata" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.111122 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca688e9-e9fb-417b-9cfe-a56b5e098a3a" containerName="nova-kuttl-api-log" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.111134 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca688e9-e9fb-417b-9cfe-a56b5e098a3a" containerName="nova-kuttl-api-api" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.111140 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="82cd9ebf-f9ce-4e60-8cce-348dfada6f12" containerName="mariadb-account-delete" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.112014 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.114859 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.118922 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.147607 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43fc7a38-c949-4c28-8449-f23a5224cf13" path="/var/lib/kubelet/pods/43fc7a38-c949-4c28-8449-f23a5224cf13/volumes" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.148614 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca688e9-e9fb-417b-9cfe-a56b5e098a3a" path="/var/lib/kubelet/pods/fca688e9-e9fb-417b-9cfe-a56b5e098a3a/volumes" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.194031 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-logs\") pod \"nova-kuttl-api-0\" (UID: \"d6776e9b-c6c4-4b79-a16e-95c8d899bb94\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.194099 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-config-data\") pod \"nova-kuttl-api-0\" (UID: \"d6776e9b-c6c4-4b79-a16e-95c8d899bb94\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.194208 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89t5p\" (UniqueName: \"kubernetes.io/projected/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-kube-api-access-89t5p\") pod \"nova-kuttl-api-0\" (UID: \"d6776e9b-c6c4-4b79-a16e-95c8d899bb94\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.295041 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-logs\") pod \"nova-kuttl-api-0\" (UID: \"d6776e9b-c6c4-4b79-a16e-95c8d899bb94\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.295127 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-config-data\") pod \"nova-kuttl-api-0\" (UID: \"d6776e9b-c6c4-4b79-a16e-95c8d899bb94\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.295267 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89t5p\" (UniqueName: \"kubernetes.io/projected/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-kube-api-access-89t5p\") pod \"nova-kuttl-api-0\" (UID: \"d6776e9b-c6c4-4b79-a16e-95c8d899bb94\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.295616 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-logs\") pod \"nova-kuttl-api-0\" (UID: \"d6776e9b-c6c4-4b79-a16e-95c8d899bb94\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:06 crc 
kubenswrapper[4875]: I0130 17:17:06.302293 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-config-data\") pod \"nova-kuttl-api-0\" (UID: \"d6776e9b-c6c4-4b79-a16e-95c8d899bb94\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.310713 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89t5p\" (UniqueName: \"kubernetes.io/projected/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-kube-api-access-89t5p\") pod \"nova-kuttl-api-0\" (UID: \"d6776e9b-c6c4-4b79-a16e-95c8d899bb94\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.429834 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.775735 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.775794 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5c742a1d-1edc-4c32-bb30-87ab682c735f","Type":"ContainerDied","Data":"98e4d4cb84e739ada7c25e5793f00314dd30dcdc17cf22557aeaa63444bda65d"} Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.776535 4875 scope.go:117] "RemoveContainer" containerID="a49b4f1596ddf10c5eae7d3d6a919e8738a416c4b842e4bce20a36c2e9d8d914" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.822128 4875 scope.go:117] "RemoveContainer" containerID="1fe32171492a3820cef455ef7bd58a02cc12843a0a6f2cbdc9172bdad8a2aa70" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.823576 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.833268 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.852789 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.858699 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.864996 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.878221 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.906979 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4391a03b-0c86-4610-a99f-0e4a1e1abce3-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"4391a03b-0c86-4610-a99f-0e4a1e1abce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.907026 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4391a03b-0c86-4610-a99f-0e4a1e1abce3-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"4391a03b-0c86-4610-a99f-0e4a1e1abce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.907155 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5znb\" (UniqueName: \"kubernetes.io/projected/4391a03b-0c86-4610-a99f-0e4a1e1abce3-kube-api-access-x5znb\") pod \"nova-kuttl-metadata-0\" (UID: \"4391a03b-0c86-4610-a99f-0e4a1e1abce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:06 crc kubenswrapper[4875]: I0130 17:17:06.925780 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.007532 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5znb\" (UniqueName: \"kubernetes.io/projected/4391a03b-0c86-4610-a99f-0e4a1e1abce3-kube-api-access-x5znb\") pod \"nova-kuttl-metadata-0\" (UID: \"4391a03b-0c86-4610-a99f-0e4a1e1abce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.007577 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4391a03b-0c86-4610-a99f-0e4a1e1abce3-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"4391a03b-0c86-4610-a99f-0e4a1e1abce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.007652 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4391a03b-0c86-4610-a99f-0e4a1e1abce3-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"4391a03b-0c86-4610-a99f-0e4a1e1abce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.008030 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4391a03b-0c86-4610-a99f-0e4a1e1abce3-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"4391a03b-0c86-4610-a99f-0e4a1e1abce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.012570 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4391a03b-0c86-4610-a99f-0e4a1e1abce3-config-data\") pod \"nova-kuttl-metadata-0\" (UID: 
\"4391a03b-0c86-4610-a99f-0e4a1e1abce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.023282 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5znb\" (UniqueName: \"kubernetes.io/projected/4391a03b-0c86-4610-a99f-0e4a1e1abce3-kube-api-access-x5znb\") pod \"nova-kuttl-metadata-0\" (UID: \"4391a03b-0c86-4610-a99f-0e4a1e1abce3\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.052223 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-kgb4q"] Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.067698 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-kgb4q"] Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.080642 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell1cf36-account-delete-hxgmj"] Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.088792 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell1cf36-account-delete-hxgmj"] Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.096426 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt"] Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.103138 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-cf36-account-create-update-sfmpt"] Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.185709 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.598091 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.794927 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d6776e9b-c6c4-4b79-a16e-95c8d899bb94","Type":"ContainerStarted","Data":"7d12b4edec8ba321d54cf7edc3d53cda4852cc1afa82bf3ff649751a28a48332"} Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.794964 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d6776e9b-c6c4-4b79-a16e-95c8d899bb94","Type":"ContainerStarted","Data":"bce312694ce95c8b4e2417285a4914297fc2566503b63b3ccfef6a3d2112a1dc"} Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.794976 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d6776e9b-c6c4-4b79-a16e-95c8d899bb94","Type":"ContainerStarted","Data":"527e336ba7f908461adae85060c560629331916b56069e42fabc0286d840ae2e"} Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.802391 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4391a03b-0c86-4610-a99f-0e4a1e1abce3","Type":"ContainerStarted","Data":"bc7a92d89dd35a9af68f741d558bf205a2df49311806bd30ace880159130871b"} Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.802419 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4391a03b-0c86-4610-a99f-0e4a1e1abce3","Type":"ContainerStarted","Data":"6c6087e62283e156a7c7931d6a5e66ce610bf8486076b62f7ed5550ad71ac40e"} Jan 30 17:17:07 crc kubenswrapper[4875]: I0130 17:17:07.822225 4875 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=1.822191209 podStartE2EDuration="1.822191209s" podCreationTimestamp="2026-01-30 17:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:17:07.812147666 +0000 UTC m=+1238.359511059" watchObservedRunningTime="2026-01-30 17:17:07.822191209 +0000 UTC m=+1238.369554582" Jan 30 17:17:08 crc kubenswrapper[4875]: I0130 17:17:08.145453 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2984f7e2-f590-4d66-ab1b-76ee8d3a7869" path="/var/lib/kubelet/pods/2984f7e2-f590-4d66-ab1b-76ee8d3a7869/volumes" Jan 30 17:17:08 crc kubenswrapper[4875]: I0130 17:17:08.145971 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="529f3b7f-281a-4cd3-a0be-885fc730c789" path="/var/lib/kubelet/pods/529f3b7f-281a-4cd3-a0be-885fc730c789/volumes" Jan 30 17:17:08 crc kubenswrapper[4875]: I0130 17:17:08.146529 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c742a1d-1edc-4c32-bb30-87ab682c735f" path="/var/lib/kubelet/pods/5c742a1d-1edc-4c32-bb30-87ab682c735f/volumes" Jan 30 17:17:08 crc kubenswrapper[4875]: I0130 17:17:08.147635 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82cd9ebf-f9ce-4e60-8cce-348dfada6f12" path="/var/lib/kubelet/pods/82cd9ebf-f9ce-4e60-8cce-348dfada6f12/volumes" Jan 30 17:17:08 crc kubenswrapper[4875]: I0130 17:17:08.815736 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4391a03b-0c86-4610-a99f-0e4a1e1abce3","Type":"ContainerStarted","Data":"42a65d8fb1d7828764831a43992659c1e3ad1479b245f7f9ba899d441394d899"} Jan 30 17:17:08 crc kubenswrapper[4875]: I0130 17:17:08.844810 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.844784382 podStartE2EDuration="2.844784382s" podCreationTimestamp="2026-01-30 17:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:17:08.835389856 +0000 UTC m=+1239.382753249" watchObservedRunningTime="2026-01-30 17:17:08.844784382 +0000 UTC m=+1239.392147775" Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.530179 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.661160 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cc68f17-43a8-4027-9a59-481aeb6771d5-config-data\") pod \"9cc68f17-43a8-4027-9a59-481aeb6771d5\" (UID: \"9cc68f17-43a8-4027-9a59-481aeb6771d5\") " Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.661579 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgk9c\" (UniqueName: \"kubernetes.io/projected/9cc68f17-43a8-4027-9a59-481aeb6771d5-kube-api-access-pgk9c\") pod \"9cc68f17-43a8-4027-9a59-481aeb6771d5\" (UID: \"9cc68f17-43a8-4027-9a59-481aeb6771d5\") " Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.666901 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cc68f17-43a8-4027-9a59-481aeb6771d5-kube-api-access-pgk9c" (OuterVolumeSpecName: "kube-api-access-pgk9c") pod "9cc68f17-43a8-4027-9a59-481aeb6771d5" (UID: "9cc68f17-43a8-4027-9a59-481aeb6771d5"). InnerVolumeSpecName "kube-api-access-pgk9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.686517 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cc68f17-43a8-4027-9a59-481aeb6771d5-config-data" (OuterVolumeSpecName: "config-data") pod "9cc68f17-43a8-4027-9a59-481aeb6771d5" (UID: "9cc68f17-43a8-4027-9a59-481aeb6771d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.764143 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgk9c\" (UniqueName: \"kubernetes.io/projected/9cc68f17-43a8-4027-9a59-481aeb6771d5-kube-api-access-pgk9c\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.764206 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cc68f17-43a8-4027-9a59-481aeb6771d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.826657 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.826678 4875 generic.go:334] "Generic (PLEG): container finished" podID="9cc68f17-43a8-4027-9a59-481aeb6771d5" containerID="c2f4e289699c9c0bf7f3f211a8bdf361ee0e970600fdb86d969fa340d51d7e11" exitCode=0 Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.826734 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"9cc68f17-43a8-4027-9a59-481aeb6771d5","Type":"ContainerDied","Data":"c2f4e289699c9c0bf7f3f211a8bdf361ee0e970600fdb86d969fa340d51d7e11"} Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.826761 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"9cc68f17-43a8-4027-9a59-481aeb6771d5","Type":"ContainerDied","Data":"9563e4c166bb38cf30b1ecdecf2ee857b1a4590cce9053aec3c90aa391e21c49"} Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.826795 4875 scope.go:117] "RemoveContainer" containerID="c2f4e289699c9c0bf7f3f211a8bdf361ee0e970600fdb86d969fa340d51d7e11" Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.854048 4875 scope.go:117] "RemoveContainer" containerID="c2f4e289699c9c0bf7f3f211a8bdf361ee0e970600fdb86d969fa340d51d7e11" Jan 30 17:17:09 crc kubenswrapper[4875]: E0130 17:17:09.859208 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2f4e289699c9c0bf7f3f211a8bdf361ee0e970600fdb86d969fa340d51d7e11\": container with ID starting with c2f4e289699c9c0bf7f3f211a8bdf361ee0e970600fdb86d969fa340d51d7e11 not found: ID does not exist" containerID="c2f4e289699c9c0bf7f3f211a8bdf361ee0e970600fdb86d969fa340d51d7e11" Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.859261 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2f4e289699c9c0bf7f3f211a8bdf361ee0e970600fdb86d969fa340d51d7e11"} err="failed to get container status \"c2f4e289699c9c0bf7f3f211a8bdf361ee0e970600fdb86d969fa340d51d7e11\": rpc error: code = NotFound desc = could not find container \"c2f4e289699c9c0bf7f3f211a8bdf361ee0e970600fdb86d969fa340d51d7e11\": container with ID starting with c2f4e289699c9c0bf7f3f211a8bdf361ee0e970600fdb86d969fa340d51d7e11 not found: ID does not exist" Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.888575 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.909967 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.915597 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:17:09 crc kubenswrapper[4875]: E0130 17:17:09.916107 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cc68f17-43a8-4027-9a59-481aeb6771d5" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.916136 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cc68f17-43a8-4027-9a59-481aeb6771d5" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.916359 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cc68f17-43a8-4027-9a59-481aeb6771d5" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:17:09 crc 
kubenswrapper[4875]: I0130 17:17:09.917121 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.919692 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 30 17:17:09 crc kubenswrapper[4875]: I0130 17:17:09.923679 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:17:10 crc kubenswrapper[4875]: I0130 17:17:10.067683 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5xkk\" (UniqueName: \"kubernetes.io/projected/266fb2db-b1d7-4a1d-8581-2ef284916384-kube-api-access-p5xkk\") pod \"nova-kuttl-scheduler-0\" (UID: \"266fb2db-b1d7-4a1d-8581-2ef284916384\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:17:10 crc kubenswrapper[4875]: I0130 17:17:10.067719 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266fb2db-b1d7-4a1d-8581-2ef284916384-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"266fb2db-b1d7-4a1d-8581-2ef284916384\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:17:10 crc kubenswrapper[4875]: I0130 17:17:10.147799 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cc68f17-43a8-4027-9a59-481aeb6771d5" path="/var/lib/kubelet/pods/9cc68f17-43a8-4027-9a59-481aeb6771d5/volumes" Jan 30 17:17:10 crc kubenswrapper[4875]: I0130 17:17:10.170745 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266fb2db-b1d7-4a1d-8581-2ef284916384-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"266fb2db-b1d7-4a1d-8581-2ef284916384\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:17:10 crc kubenswrapper[4875]: I0130 17:17:10.170803 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5xkk\" (UniqueName: \"kubernetes.io/projected/266fb2db-b1d7-4a1d-8581-2ef284916384-kube-api-access-p5xkk\") pod \"nova-kuttl-scheduler-0\" (UID: \"266fb2db-b1d7-4a1d-8581-2ef284916384\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:17:10 crc kubenswrapper[4875]: I0130 17:17:10.175342 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266fb2db-b1d7-4a1d-8581-2ef284916384-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"266fb2db-b1d7-4a1d-8581-2ef284916384\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:17:10 crc kubenswrapper[4875]: I0130 17:17:10.191563 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5xkk\" (UniqueName: \"kubernetes.io/projected/266fb2db-b1d7-4a1d-8581-2ef284916384-kube-api-access-p5xkk\") pod \"nova-kuttl-scheduler-0\" (UID: \"266fb2db-b1d7-4a1d-8581-2ef284916384\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:17:10 crc kubenswrapper[4875]: I0130 17:17:10.231546 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:17:10 crc kubenswrapper[4875]: I0130 17:17:10.711122 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:17:10 crc kubenswrapper[4875]: I0130 17:17:10.838576 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"266fb2db-b1d7-4a1d-8581-2ef284916384","Type":"ContainerStarted","Data":"b8c6754749e85c9676d0f4403791206d2920e2636e7b792d6acf15ea1c1bb9dc"} Jan 30 17:17:11 crc kubenswrapper[4875]: I0130 17:17:11.848394 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"266fb2db-b1d7-4a1d-8581-2ef284916384","Type":"ContainerStarted","Data":"039adc2b6b4d851dd4d207487ca5257b522af43ebc72c98b7c4f8db7c96ef7ba"} Jan 30 17:17:11 crc kubenswrapper[4875]: I0130 17:17:11.869626 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.869605632 podStartE2EDuration="2.869605632s" podCreationTimestamp="2026-01-30 17:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:17:11.860438913 +0000 UTC m=+1242.407802306" watchObservedRunningTime="2026-01-30 17:17:11.869605632 +0000 UTC m=+1242.416969015" Jan 30 17:17:12 crc kubenswrapper[4875]: I0130 17:17:12.186347 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:12 crc kubenswrapper[4875]: I0130 17:17:12.186479 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:15 crc kubenswrapper[4875]: I0130 17:17:15.232702 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:17:16 crc kubenswrapper[4875]: I0130 17:17:16.430758 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:16 crc kubenswrapper[4875]: I0130 17:17:16.430851 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:17 crc kubenswrapper[4875]: I0130 17:17:17.186019 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:17 crc kubenswrapper[4875]: I0130 17:17:17.186090 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:17 crc kubenswrapper[4875]: I0130 17:17:17.512806 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="d6776e9b-c6c4-4b79-a16e-95c8d899bb94" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.145:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:17 crc kubenswrapper[4875]: I0130 17:17:17.512953 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="d6776e9b-c6c4-4b79-a16e-95c8d899bb94" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.145:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:18 crc kubenswrapper[4875]: I0130 17:17:18.268842 4875 
prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4391a03b-0c86-4610-a99f-0e4a1e1abce3" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.146:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:18 crc kubenswrapper[4875]: I0130 17:17:18.268838 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4391a03b-0c86-4610-a99f-0e4a1e1abce3" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.146:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:20 crc kubenswrapper[4875]: I0130 17:17:20.232179 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:17:20 crc kubenswrapper[4875]: I0130 17:17:20.253825 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:17:20 crc kubenswrapper[4875]: I0130 17:17:20.966012 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:17:26 crc kubenswrapper[4875]: I0130 17:17:26.287479 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:17:26 crc kubenswrapper[4875]: I0130 17:17:26.287813 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:17:26 crc kubenswrapper[4875]: I0130 17:17:26.435253 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:26 crc kubenswrapper[4875]: I0130 17:17:26.435803 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:26 crc kubenswrapper[4875]: I0130 17:17:26.435904 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:26 crc kubenswrapper[4875]: I0130 17:17:26.442836 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:26 crc kubenswrapper[4875]: I0130 17:17:26.971407 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:26 crc kubenswrapper[4875]: I0130 17:17:26.974858 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:17:27 crc kubenswrapper[4875]: I0130 17:17:27.192742 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:27 crc kubenswrapper[4875]: I0130 17:17:27.194112 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:27 crc kubenswrapper[4875]: I0130 17:17:27.194708 4875 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:27 crc kubenswrapper[4875]: I0130 17:17:27.984435 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:17:30 crc kubenswrapper[4875]: I0130 17:17:30.891623 4875 scope.go:117] "RemoveContainer" containerID="306cbe63187b4027521531669a32ad3a9bfa8762d77cd183d3bb39361df79e0a" Jan 30 17:17:30 crc kubenswrapper[4875]: I0130 17:17:30.920079 4875 scope.go:117] "RemoveContainer" containerID="fde009239b80ebb365dbb77159f6056a09d228efff1c6095f4d92e1d6e5a723d" Jan 30 17:17:30 crc kubenswrapper[4875]: I0130 17:17:30.956601 4875 scope.go:117] "RemoveContainer" containerID="7666eed00d013f5d21eb9bec5993826524c7a9fe4389c6d095fe134e16326e0a" Jan 30 17:17:56 crc kubenswrapper[4875]: I0130 17:17:56.286905 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:17:56 crc kubenswrapper[4875]: I0130 17:17:56.287482 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:18:26 crc kubenswrapper[4875]: I0130 17:18:26.287524 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:18:26 crc kubenswrapper[4875]: I0130 17:18:26.288777 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:18:26 crc kubenswrapper[4875]: I0130 17:18:26.288872 4875 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 17:18:26 crc kubenswrapper[4875]: I0130 17:18:26.290223 4875 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"48e3a087955728186281898d070efcfe8a3f5df09e6720b6da52c18157fc11ce"} pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:18:26 crc kubenswrapper[4875]: I0130 17:18:26.290305 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" containerID="cri-o://48e3a087955728186281898d070efcfe8a3f5df09e6720b6da52c18157fc11ce" gracePeriod=600 Jan 30 17:18:26 crc kubenswrapper[4875]: I0130 17:18:26.616884 4875 generic.go:334] "Generic (PLEG): container finished" podID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" 
containerID="48e3a087955728186281898d070efcfe8a3f5df09e6720b6da52c18157fc11ce" exitCode=0 Jan 30 17:18:26 crc kubenswrapper[4875]: I0130 17:18:26.616954 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerDied","Data":"48e3a087955728186281898d070efcfe8a3f5df09e6720b6da52c18157fc11ce"} Jan 30 17:18:26 crc kubenswrapper[4875]: I0130 17:18:26.617220 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerStarted","Data":"229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78"} Jan 30 17:18:26 crc kubenswrapper[4875]: I0130 17:18:26.617246 4875 scope.go:117] "RemoveContainer" containerID="6514542be49997aad4594ad0a6547ac470439752a0efaf44fa7c391eb010bcf6" Jan 30 17:18:31 crc kubenswrapper[4875]: I0130 17:18:31.088653 4875 scope.go:117] "RemoveContainer" containerID="ac01eaeb76ee4b502893775cc7b4fd2a9c426eda11d37e1d3233a97604a95d90" Jan 30 17:19:31 crc kubenswrapper[4875]: I0130 17:19:31.135739 4875 scope.go:117] "RemoveContainer" containerID="dac3a03bf6b19c8eefa4d87a19106ed98c4b4313745986e919fc1d08b0db2e74" Jan 30 17:19:59 crc kubenswrapper[4875]: I0130 17:19:59.695319 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jdfmj"] Jan 30 17:19:59 crc kubenswrapper[4875]: I0130 17:19:59.701218 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdfmj" Jan 30 17:19:59 crc kubenswrapper[4875]: I0130 17:19:59.713019 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jdfmj"] Jan 30 17:19:59 crc kubenswrapper[4875]: I0130 17:19:59.817888 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76fe2519-334c-4a37-b267-a0aeb9005095-catalog-content\") pod \"community-operators-jdfmj\" (UID: \"76fe2519-334c-4a37-b267-a0aeb9005095\") " pod="openshift-marketplace/community-operators-jdfmj" Jan 30 17:19:59 crc kubenswrapper[4875]: I0130 17:19:59.817931 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qz9g\" (UniqueName: \"kubernetes.io/projected/76fe2519-334c-4a37-b267-a0aeb9005095-kube-api-access-4qz9g\") pod \"community-operators-jdfmj\" (UID: \"76fe2519-334c-4a37-b267-a0aeb9005095\") " pod="openshift-marketplace/community-operators-jdfmj" Jan 30 17:19:59 crc kubenswrapper[4875]: I0130 17:19:59.817966 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76fe2519-334c-4a37-b267-a0aeb9005095-utilities\") pod \"community-operators-jdfmj\" (UID: \"76fe2519-334c-4a37-b267-a0aeb9005095\") " pod="openshift-marketplace/community-operators-jdfmj" Jan 30 17:19:59 crc kubenswrapper[4875]: I0130 17:19:59.920223 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76fe2519-334c-4a37-b267-a0aeb9005095-utilities\") pod \"community-operators-jdfmj\" (UID: \"76fe2519-334c-4a37-b267-a0aeb9005095\") " pod="openshift-marketplace/community-operators-jdfmj" Jan 30 17:19:59 crc kubenswrapper[4875]: I0130 17:19:59.920413 4875 
Jan 30 17:19:59 crc kubenswrapper[4875]: I0130 17:19:59.920413 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76fe2519-334c-4a37-b267-a0aeb9005095-catalog-content\") pod \"community-operators-jdfmj\" (UID: \"76fe2519-334c-4a37-b267-a0aeb9005095\") " pod="openshift-marketplace/community-operators-jdfmj"
Jan 30 17:19:59 crc kubenswrapper[4875]: I0130 17:19:59.920442 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qz9g\" (UniqueName: \"kubernetes.io/projected/76fe2519-334c-4a37-b267-a0aeb9005095-kube-api-access-4qz9g\") pod \"community-operators-jdfmj\" (UID: \"76fe2519-334c-4a37-b267-a0aeb9005095\") " pod="openshift-marketplace/community-operators-jdfmj"
Jan 30 17:19:59 crc kubenswrapper[4875]: I0130 17:19:59.920787 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76fe2519-334c-4a37-b267-a0aeb9005095-utilities\") pod \"community-operators-jdfmj\" (UID: \"76fe2519-334c-4a37-b267-a0aeb9005095\") " pod="openshift-marketplace/community-operators-jdfmj"
Jan 30 17:19:59 crc kubenswrapper[4875]: I0130 17:19:59.921008 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76fe2519-334c-4a37-b267-a0aeb9005095-catalog-content\") pod \"community-operators-jdfmj\" (UID: \"76fe2519-334c-4a37-b267-a0aeb9005095\") " pod="openshift-marketplace/community-operators-jdfmj"
Jan 30 17:19:59 crc kubenswrapper[4875]: I0130 17:19:59.941494 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qz9g\" (UniqueName: \"kubernetes.io/projected/76fe2519-334c-4a37-b267-a0aeb9005095-kube-api-access-4qz9g\") pod \"community-operators-jdfmj\" (UID: \"76fe2519-334c-4a37-b267-a0aeb9005095\") " pod="openshift-marketplace/community-operators-jdfmj"
Jan 30 17:20:00 crc kubenswrapper[4875]: I0130 17:20:00.024083 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdfmj"
Jan 30 17:20:00 crc kubenswrapper[4875]: I0130 17:20:00.532839 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jdfmj"]
Jan 30 17:20:01 crc kubenswrapper[4875]: I0130 17:20:01.485549 4875 generic.go:334] "Generic (PLEG): container finished" podID="76fe2519-334c-4a37-b267-a0aeb9005095" containerID="da91eed8c1b5bc342cb02df56bf93870aea8232cb8b584e9f175df0ed2604537" exitCode=0
Jan 30 17:20:01 crc kubenswrapper[4875]: I0130 17:20:01.485629 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdfmj" event={"ID":"76fe2519-334c-4a37-b267-a0aeb9005095","Type":"ContainerDied","Data":"da91eed8c1b5bc342cb02df56bf93870aea8232cb8b584e9f175df0ed2604537"}
Jan 30 17:20:01 crc kubenswrapper[4875]: I0130 17:20:01.486087 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdfmj" event={"ID":"76fe2519-334c-4a37-b267-a0aeb9005095","Type":"ContainerStarted","Data":"5f9319bba44d5e076ba8258953ddf5605c0a335a9854db24ceae544cb9f26fe6"}
Jan 30 17:20:02 crc kubenswrapper[4875]: I0130 17:20:02.498100 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdfmj" event={"ID":"76fe2519-334c-4a37-b267-a0aeb9005095","Type":"ContainerStarted","Data":"876b95c9bfe0c620565621a9a5a333240b443e37ca135121439ee6143c98321e"}
Jan 30 17:20:03 crc kubenswrapper[4875]: I0130 17:20:03.511384 4875 generic.go:334] "Generic (PLEG): container finished" podID="76fe2519-334c-4a37-b267-a0aeb9005095" containerID="876b95c9bfe0c620565621a9a5a333240b443e37ca135121439ee6143c98321e" exitCode=0
Jan 30 17:20:03 crc kubenswrapper[4875]: I0130 17:20:03.511451 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdfmj" event={"ID":"76fe2519-334c-4a37-b267-a0aeb9005095","Type":"ContainerDied","Data":"876b95c9bfe0c620565621a9a5a333240b443e37ca135121439ee6143c98321e"}
Jan 30 17:20:04 crc kubenswrapper[4875]: I0130 17:20:04.525391 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdfmj" event={"ID":"76fe2519-334c-4a37-b267-a0aeb9005095","Type":"ContainerStarted","Data":"e71b7dbe2e422b78a7ea89a2fabee4730b32339ea1a2614ff09f5773dee46839"}
Jan 30 17:20:04 crc kubenswrapper[4875]: I0130 17:20:04.555868 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jdfmj" podStartSLOduration=3.117866127 podStartE2EDuration="5.555846266s" podCreationTimestamp="2026-01-30 17:19:59 +0000 UTC" firstStartedPulling="2026-01-30 17:20:01.48975842 +0000 UTC m=+1412.037121823" lastFinishedPulling="2026-01-30 17:20:03.927738549 +0000 UTC m=+1414.475101962" observedRunningTime="2026-01-30 17:20:04.550314037 +0000 UTC m=+1415.097677450" watchObservedRunningTime="2026-01-30 17:20:04.555846266 +0000 UTC m=+1415.103209659"
Jan 30 17:20:05 crc kubenswrapper[4875]: I0130 17:20:05.267545 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ch78f"]
Jan 30 17:20:05 crc kubenswrapper[4875]: I0130 17:20:05.270329 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ch78f"
Jan 30 17:20:05 crc kubenswrapper[4875]: I0130 17:20:05.275787 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ch78f"]
Jan 30 17:20:05 crc kubenswrapper[4875]: I0130 17:20:05.424271 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c317cdb1-d0ec-43c5-bd8c-49bef15233f3-utilities\") pod \"redhat-operators-ch78f\" (UID: \"c317cdb1-d0ec-43c5-bd8c-49bef15233f3\") " pod="openshift-marketplace/redhat-operators-ch78f"
Jan 30 17:20:05 crc kubenswrapper[4875]: I0130 17:20:05.424542 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d4x6\" (UniqueName: \"kubernetes.io/projected/c317cdb1-d0ec-43c5-bd8c-49bef15233f3-kube-api-access-4d4x6\") pod \"redhat-operators-ch78f\" (UID: \"c317cdb1-d0ec-43c5-bd8c-49bef15233f3\") " pod="openshift-marketplace/redhat-operators-ch78f"
Jan 30 17:20:05 crc kubenswrapper[4875]: I0130 17:20:05.424651 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c317cdb1-d0ec-43c5-bd8c-49bef15233f3-catalog-content\") pod \"redhat-operators-ch78f\" (UID: \"c317cdb1-d0ec-43c5-bd8c-49bef15233f3\") " pod="openshift-marketplace/redhat-operators-ch78f"
Jan 30 17:20:05 crc kubenswrapper[4875]: I0130 17:20:05.525690 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d4x6\" (UniqueName: \"kubernetes.io/projected/c317cdb1-d0ec-43c5-bd8c-49bef15233f3-kube-api-access-4d4x6\") pod \"redhat-operators-ch78f\" (UID: \"c317cdb1-d0ec-43c5-bd8c-49bef15233f3\") " pod="openshift-marketplace/redhat-operators-ch78f"
Jan 30 17:20:05 crc kubenswrapper[4875]: I0130 17:20:05.525856 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c317cdb1-d0ec-43c5-bd8c-49bef15233f3-catalog-content\") pod \"redhat-operators-ch78f\" (UID: \"c317cdb1-d0ec-43c5-bd8c-49bef15233f3\") " pod="openshift-marketplace/redhat-operators-ch78f"
Jan 30 17:20:05 crc kubenswrapper[4875]: I0130 17:20:05.526002 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c317cdb1-d0ec-43c5-bd8c-49bef15233f3-utilities\") pod \"redhat-operators-ch78f\" (UID: \"c317cdb1-d0ec-43c5-bd8c-49bef15233f3\") " pod="openshift-marketplace/redhat-operators-ch78f"
Jan 30 17:20:05 crc kubenswrapper[4875]: I0130 17:20:05.526473 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c317cdb1-d0ec-43c5-bd8c-49bef15233f3-catalog-content\") pod \"redhat-operators-ch78f\" (UID: \"c317cdb1-d0ec-43c5-bd8c-49bef15233f3\") " pod="openshift-marketplace/redhat-operators-ch78f"
Jan 30 17:20:05 crc kubenswrapper[4875]: I0130 17:20:05.526640 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c317cdb1-d0ec-43c5-bd8c-49bef15233f3-utilities\") pod \"redhat-operators-ch78f\" (UID: \"c317cdb1-d0ec-43c5-bd8c-49bef15233f3\") " pod="openshift-marketplace/redhat-operators-ch78f"
\"kube-api-access-4d4x6\" (UniqueName: \"kubernetes.io/projected/c317cdb1-d0ec-43c5-bd8c-49bef15233f3-kube-api-access-4d4x6\") pod \"redhat-operators-ch78f\" (UID: \"c317cdb1-d0ec-43c5-bd8c-49bef15233f3\") " pod="openshift-marketplace/redhat-operators-ch78f" Jan 30 17:20:05 crc kubenswrapper[4875]: I0130 17:20:05.591953 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ch78f" Jan 30 17:20:06 crc kubenswrapper[4875]: I0130 17:20:06.034357 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ch78f"] Jan 30 17:20:06 crc kubenswrapper[4875]: I0130 17:20:06.542561 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch78f" event={"ID":"c317cdb1-d0ec-43c5-bd8c-49bef15233f3","Type":"ContainerStarted","Data":"d15b21659eba410850e68c1e5fa184cbe74290ca5587b3495c17d1fae169e854"} Jan 30 17:20:06 crc kubenswrapper[4875]: I0130 17:20:06.542885 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch78f" event={"ID":"c317cdb1-d0ec-43c5-bd8c-49bef15233f3","Type":"ContainerStarted","Data":"5a40109b80710504d474d0834bc45d2c0221005b1dfad3f70c3c21eacd76da86"} Jan 30 17:20:07 crc kubenswrapper[4875]: I0130 17:20:07.551799 4875 generic.go:334] "Generic (PLEG): container finished" podID="c317cdb1-d0ec-43c5-bd8c-49bef15233f3" containerID="d15b21659eba410850e68c1e5fa184cbe74290ca5587b3495c17d1fae169e854" exitCode=0 Jan 30 17:20:07 crc kubenswrapper[4875]: I0130 17:20:07.551844 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch78f" event={"ID":"c317cdb1-d0ec-43c5-bd8c-49bef15233f3","Type":"ContainerDied","Data":"d15b21659eba410850e68c1e5fa184cbe74290ca5587b3495c17d1fae169e854"} Jan 30 17:20:09 crc kubenswrapper[4875]: I0130 17:20:09.570392 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch78f" event={"ID":"c317cdb1-d0ec-43c5-bd8c-49bef15233f3","Type":"ContainerStarted","Data":"7b1a73ec009cf8a41d2e9e97ee89980563811f05559796f0444236d9fbb01dc9"} Jan 30 17:20:10 crc kubenswrapper[4875]: I0130 17:20:10.024368 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jdfmj" Jan 30 17:20:10 crc kubenswrapper[4875]: I0130 17:20:10.024518 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jdfmj" Jan 30 17:20:10 crc kubenswrapper[4875]: I0130 17:20:10.082410 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jdfmj" Jan 30 17:20:10 crc kubenswrapper[4875]: I0130 17:20:10.647964 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jdfmj" Jan 30 17:20:11 crc kubenswrapper[4875]: I0130 17:20:11.251432 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jdfmj"] Jan 30 17:20:11 crc kubenswrapper[4875]: I0130 17:20:11.589718 4875 generic.go:334] "Generic (PLEG): container finished" podID="c317cdb1-d0ec-43c5-bd8c-49bef15233f3" containerID="7b1a73ec009cf8a41d2e9e97ee89980563811f05559796f0444236d9fbb01dc9" exitCode=0 Jan 30 17:20:11 crc kubenswrapper[4875]: I0130 17:20:11.589801 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch78f" 
event={"ID":"c317cdb1-d0ec-43c5-bd8c-49bef15233f3","Type":"ContainerDied","Data":"7b1a73ec009cf8a41d2e9e97ee89980563811f05559796f0444236d9fbb01dc9"} Jan 30 17:20:12 crc kubenswrapper[4875]: I0130 17:20:12.600077 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jdfmj" podUID="76fe2519-334c-4a37-b267-a0aeb9005095" containerName="registry-server" containerID="cri-o://e71b7dbe2e422b78a7ea89a2fabee4730b32339ea1a2614ff09f5773dee46839" gracePeriod=2 Jan 30 17:20:12 crc kubenswrapper[4875]: I0130 17:20:12.600609 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch78f" event={"ID":"c317cdb1-d0ec-43c5-bd8c-49bef15233f3","Type":"ContainerStarted","Data":"affb97503c8c3177e3a24516319eedd02f09f91aafe818545f44ed9ae811c155"} Jan 30 17:20:12 crc kubenswrapper[4875]: I0130 17:20:12.627256 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ch78f" podStartSLOduration=2.9280055320000002 podStartE2EDuration="7.6272316s" podCreationTimestamp="2026-01-30 17:20:05 +0000 UTC" firstStartedPulling="2026-01-30 17:20:07.55410457 +0000 UTC m=+1418.101467953" lastFinishedPulling="2026-01-30 17:20:12.253330638 +0000 UTC m=+1422.800694021" observedRunningTime="2026-01-30 17:20:12.622031892 +0000 UTC m=+1423.169395285" watchObservedRunningTime="2026-01-30 17:20:12.6272316 +0000 UTC m=+1423.174594993" Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.020807 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdfmj" Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.154440 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76fe2519-334c-4a37-b267-a0aeb9005095-catalog-content\") pod \"76fe2519-334c-4a37-b267-a0aeb9005095\" (UID: \"76fe2519-334c-4a37-b267-a0aeb9005095\") " Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.154499 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qz9g\" (UniqueName: \"kubernetes.io/projected/76fe2519-334c-4a37-b267-a0aeb9005095-kube-api-access-4qz9g\") pod \"76fe2519-334c-4a37-b267-a0aeb9005095\" (UID: \"76fe2519-334c-4a37-b267-a0aeb9005095\") " Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.154561 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76fe2519-334c-4a37-b267-a0aeb9005095-utilities\") pod \"76fe2519-334c-4a37-b267-a0aeb9005095\" (UID: \"76fe2519-334c-4a37-b267-a0aeb9005095\") " Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.155195 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76fe2519-334c-4a37-b267-a0aeb9005095-utilities" (OuterVolumeSpecName: "utilities") pod "76fe2519-334c-4a37-b267-a0aeb9005095" (UID: "76fe2519-334c-4a37-b267-a0aeb9005095"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.159222 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76fe2519-334c-4a37-b267-a0aeb9005095-kube-api-access-4qz9g" (OuterVolumeSpecName: "kube-api-access-4qz9g") pod "76fe2519-334c-4a37-b267-a0aeb9005095" (UID: "76fe2519-334c-4a37-b267-a0aeb9005095"). 
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.159222 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76fe2519-334c-4a37-b267-a0aeb9005095-kube-api-access-4qz9g" (OuterVolumeSpecName: "kube-api-access-4qz9g") pod "76fe2519-334c-4a37-b267-a0aeb9005095" (UID: "76fe2519-334c-4a37-b267-a0aeb9005095"). InnerVolumeSpecName "kube-api-access-4qz9g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.201107 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76fe2519-334c-4a37-b267-a0aeb9005095-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76fe2519-334c-4a37-b267-a0aeb9005095" (UID: "76fe2519-334c-4a37-b267-a0aeb9005095"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.257054 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76fe2519-334c-4a37-b267-a0aeb9005095-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.257090 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qz9g\" (UniqueName: \"kubernetes.io/projected/76fe2519-334c-4a37-b267-a0aeb9005095-kube-api-access-4qz9g\") on node \"crc\" DevicePath \"\""
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.257105 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76fe2519-334c-4a37-b267-a0aeb9005095-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.614375 4875 generic.go:334] "Generic (PLEG): container finished" podID="76fe2519-334c-4a37-b267-a0aeb9005095" containerID="e71b7dbe2e422b78a7ea89a2fabee4730b32339ea1a2614ff09f5773dee46839" exitCode=0
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.614403 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdfmj"
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.614424 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdfmj" event={"ID":"76fe2519-334c-4a37-b267-a0aeb9005095","Type":"ContainerDied","Data":"e71b7dbe2e422b78a7ea89a2fabee4730b32339ea1a2614ff09f5773dee46839"}
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.616126 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdfmj" event={"ID":"76fe2519-334c-4a37-b267-a0aeb9005095","Type":"ContainerDied","Data":"5f9319bba44d5e076ba8258953ddf5605c0a335a9854db24ceae544cb9f26fe6"}
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.616151 4875 scope.go:117] "RemoveContainer" containerID="e71b7dbe2e422b78a7ea89a2fabee4730b32339ea1a2614ff09f5773dee46839"
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.641291 4875 scope.go:117] "RemoveContainer" containerID="876b95c9bfe0c620565621a9a5a333240b443e37ca135121439ee6143c98321e"
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.664318 4875 scope.go:117] "RemoveContainer" containerID="da91eed8c1b5bc342cb02df56bf93870aea8232cb8b584e9f175df0ed2604537"
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.667105 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jdfmj"]
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.694138 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jdfmj"]
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.707432 4875 scope.go:117] "RemoveContainer" containerID="e71b7dbe2e422b78a7ea89a2fabee4730b32339ea1a2614ff09f5773dee46839"
Jan 30 17:20:13 crc kubenswrapper[4875]: E0130 17:20:13.708007 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e71b7dbe2e422b78a7ea89a2fabee4730b32339ea1a2614ff09f5773dee46839\": container with ID starting with e71b7dbe2e422b78a7ea89a2fabee4730b32339ea1a2614ff09f5773dee46839 not found: ID does not exist" containerID="e71b7dbe2e422b78a7ea89a2fabee4730b32339ea1a2614ff09f5773dee46839"
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.708044 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e71b7dbe2e422b78a7ea89a2fabee4730b32339ea1a2614ff09f5773dee46839"} err="failed to get container status \"e71b7dbe2e422b78a7ea89a2fabee4730b32339ea1a2614ff09f5773dee46839\": rpc error: code = NotFound desc = could not find container \"e71b7dbe2e422b78a7ea89a2fabee4730b32339ea1a2614ff09f5773dee46839\": container with ID starting with e71b7dbe2e422b78a7ea89a2fabee4730b32339ea1a2614ff09f5773dee46839 not found: ID does not exist"
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.708071 4875 scope.go:117] "RemoveContainer" containerID="876b95c9bfe0c620565621a9a5a333240b443e37ca135121439ee6143c98321e"
Jan 30 17:20:13 crc kubenswrapper[4875]: E0130 17:20:13.708389 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"876b95c9bfe0c620565621a9a5a333240b443e37ca135121439ee6143c98321e\": container with ID starting with 876b95c9bfe0c620565621a9a5a333240b443e37ca135121439ee6143c98321e not found: ID does not exist" containerID="876b95c9bfe0c620565621a9a5a333240b443e37ca135121439ee6143c98321e"
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.708420 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"876b95c9bfe0c620565621a9a5a333240b443e37ca135121439ee6143c98321e"} err="failed to get container status \"876b95c9bfe0c620565621a9a5a333240b443e37ca135121439ee6143c98321e\": rpc error: code = NotFound desc = could not find container \"876b95c9bfe0c620565621a9a5a333240b443e37ca135121439ee6143c98321e\": container with ID starting with 876b95c9bfe0c620565621a9a5a333240b443e37ca135121439ee6143c98321e not found: ID does not exist"
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.708433 4875 scope.go:117] "RemoveContainer" containerID="da91eed8c1b5bc342cb02df56bf93870aea8232cb8b584e9f175df0ed2604537"
Jan 30 17:20:13 crc kubenswrapper[4875]: E0130 17:20:13.708891 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da91eed8c1b5bc342cb02df56bf93870aea8232cb8b584e9f175df0ed2604537\": container with ID starting with da91eed8c1b5bc342cb02df56bf93870aea8232cb8b584e9f175df0ed2604537 not found: ID does not exist" containerID="da91eed8c1b5bc342cb02df56bf93870aea8232cb8b584e9f175df0ed2604537"
Jan 30 17:20:13 crc kubenswrapper[4875]: I0130 17:20:13.708932 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da91eed8c1b5bc342cb02df56bf93870aea8232cb8b584e9f175df0ed2604537"} err="failed to get container status \"da91eed8c1b5bc342cb02df56bf93870aea8232cb8b584e9f175df0ed2604537\": rpc error: code = NotFound desc = could not find container \"da91eed8c1b5bc342cb02df56bf93870aea8232cb8b584e9f175df0ed2604537\": container with ID starting with da91eed8c1b5bc342cb02df56bf93870aea8232cb8b584e9f175df0ed2604537 not found: ID does not exist"
Jan 30 17:20:14 crc kubenswrapper[4875]: I0130 17:20:14.153232 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76fe2519-334c-4a37-b267-a0aeb9005095" path="/var/lib/kubelet/pods/76fe2519-334c-4a37-b267-a0aeb9005095/volumes"
Jan 30 17:20:15 crc kubenswrapper[4875]: I0130 17:20:15.592676 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ch78f"
Jan 30 17:20:15 crc kubenswrapper[4875]: I0130 17:20:15.593743 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ch78f"
Jan 30 17:20:16 crc kubenswrapper[4875]: I0130 17:20:16.634202 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ch78f" podUID="c317cdb1-d0ec-43c5-bd8c-49bef15233f3" containerName="registry-server" probeResult="failure" output=<
Jan 30 17:20:16 crc kubenswrapper[4875]: timeout: failed to connect service ":50051" within 1s
Jan 30 17:20:16 crc kubenswrapper[4875]: >
Jan 30 17:20:25 crc kubenswrapper[4875]: I0130 17:20:25.673223 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ch78f"
Jan 30 17:20:25 crc kubenswrapper[4875]: I0130 17:20:25.737565 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ch78f"
Jan 30 17:20:25 crc kubenswrapper[4875]: I0130 17:20:25.919711 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ch78f"]
Jan 30 17:20:26 crc kubenswrapper[4875]: I0130 17:20:26.287660 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:20:26 crc kubenswrapper[4875]: I0130 17:20:26.287755 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:20:26 crc kubenswrapper[4875]: I0130 17:20:26.727782 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ch78f" podUID="c317cdb1-d0ec-43c5-bd8c-49bef15233f3" containerName="registry-server" containerID="cri-o://affb97503c8c3177e3a24516319eedd02f09f91aafe818545f44ed9ae811c155" gracePeriod=2
Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.108684 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ch78f"
Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.206913 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4x6\" (UniqueName: \"kubernetes.io/projected/c317cdb1-d0ec-43c5-bd8c-49bef15233f3-kube-api-access-4d4x6\") pod \"c317cdb1-d0ec-43c5-bd8c-49bef15233f3\" (UID: \"c317cdb1-d0ec-43c5-bd8c-49bef15233f3\") "
Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.206996 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c317cdb1-d0ec-43c5-bd8c-49bef15233f3-utilities\") pod \"c317cdb1-d0ec-43c5-bd8c-49bef15233f3\" (UID: \"c317cdb1-d0ec-43c5-bd8c-49bef15233f3\") "
Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.207073 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c317cdb1-d0ec-43c5-bd8c-49bef15233f3-catalog-content\") pod \"c317cdb1-d0ec-43c5-bd8c-49bef15233f3\" (UID: \"c317cdb1-d0ec-43c5-bd8c-49bef15233f3\") "
Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.208967 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c317cdb1-d0ec-43c5-bd8c-49bef15233f3-utilities" (OuterVolumeSpecName: "utilities") pod "c317cdb1-d0ec-43c5-bd8c-49bef15233f3" (UID: "c317cdb1-d0ec-43c5-bd8c-49bef15233f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.214497 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c317cdb1-d0ec-43c5-bd8c-49bef15233f3-kube-api-access-4d4x6" (OuterVolumeSpecName: "kube-api-access-4d4x6") pod "c317cdb1-d0ec-43c5-bd8c-49bef15233f3" (UID: "c317cdb1-d0ec-43c5-bd8c-49bef15233f3"). InnerVolumeSpecName "kube-api-access-4d4x6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.309056 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4x6\" (UniqueName: \"kubernetes.io/projected/c317cdb1-d0ec-43c5-bd8c-49bef15233f3-kube-api-access-4d4x6\") on node \"crc\" DevicePath \"\""
Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.309088 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c317cdb1-d0ec-43c5-bd8c-49bef15233f3-utilities\") on node \"crc\" DevicePath \"\""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.410457 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c317cdb1-d0ec-43c5-bd8c-49bef15233f3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.744692 4875 generic.go:334] "Generic (PLEG): container finished" podID="c317cdb1-d0ec-43c5-bd8c-49bef15233f3" containerID="affb97503c8c3177e3a24516319eedd02f09f91aafe818545f44ed9ae811c155" exitCode=0 Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.744787 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ch78f" Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.744811 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch78f" event={"ID":"c317cdb1-d0ec-43c5-bd8c-49bef15233f3","Type":"ContainerDied","Data":"affb97503c8c3177e3a24516319eedd02f09f91aafe818545f44ed9ae811c155"} Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.746086 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ch78f" event={"ID":"c317cdb1-d0ec-43c5-bd8c-49bef15233f3","Type":"ContainerDied","Data":"5a40109b80710504d474d0834bc45d2c0221005b1dfad3f70c3c21eacd76da86"} Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.746132 4875 scope.go:117] "RemoveContainer" containerID="affb97503c8c3177e3a24516319eedd02f09f91aafe818545f44ed9ae811c155" Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.769405 4875 scope.go:117] "RemoveContainer" containerID="7b1a73ec009cf8a41d2e9e97ee89980563811f05559796f0444236d9fbb01dc9" Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.831712 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ch78f"] Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.839382 4875 scope.go:117] "RemoveContainer" containerID="d15b21659eba410850e68c1e5fa184cbe74290ca5587b3495c17d1fae169e854" Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.844744 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ch78f"] Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.872626 4875 scope.go:117] "RemoveContainer" containerID="affb97503c8c3177e3a24516319eedd02f09f91aafe818545f44ed9ae811c155" Jan 30 17:20:27 crc kubenswrapper[4875]: E0130 17:20:27.873379 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"affb97503c8c3177e3a24516319eedd02f09f91aafe818545f44ed9ae811c155\": container with ID starting with affb97503c8c3177e3a24516319eedd02f09f91aafe818545f44ed9ae811c155 not found: ID does not exist" containerID="affb97503c8c3177e3a24516319eedd02f09f91aafe818545f44ed9ae811c155" Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.873416 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"affb97503c8c3177e3a24516319eedd02f09f91aafe818545f44ed9ae811c155"} err="failed to get container status \"affb97503c8c3177e3a24516319eedd02f09f91aafe818545f44ed9ae811c155\": rpc error: code = NotFound desc = could not find container \"affb97503c8c3177e3a24516319eedd02f09f91aafe818545f44ed9ae811c155\": container with ID starting with affb97503c8c3177e3a24516319eedd02f09f91aafe818545f44ed9ae811c155 not found: ID does not exist" Jan 30 17:20:27 crc 
Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.873443 4875 scope.go:117] "RemoveContainer" containerID="7b1a73ec009cf8a41d2e9e97ee89980563811f05559796f0444236d9fbb01dc9"
Jan 30 17:20:27 crc kubenswrapper[4875]: E0130 17:20:27.873930 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b1a73ec009cf8a41d2e9e97ee89980563811f05559796f0444236d9fbb01dc9\": container with ID starting with 7b1a73ec009cf8a41d2e9e97ee89980563811f05559796f0444236d9fbb01dc9 not found: ID does not exist" containerID="7b1a73ec009cf8a41d2e9e97ee89980563811f05559796f0444236d9fbb01dc9"
Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.873958 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b1a73ec009cf8a41d2e9e97ee89980563811f05559796f0444236d9fbb01dc9"} err="failed to get container status \"7b1a73ec009cf8a41d2e9e97ee89980563811f05559796f0444236d9fbb01dc9\": rpc error: code = NotFound desc = could not find container \"7b1a73ec009cf8a41d2e9e97ee89980563811f05559796f0444236d9fbb01dc9\": container with ID starting with 7b1a73ec009cf8a41d2e9e97ee89980563811f05559796f0444236d9fbb01dc9 not found: ID does not exist"
Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.873976 4875 scope.go:117] "RemoveContainer" containerID="d15b21659eba410850e68c1e5fa184cbe74290ca5587b3495c17d1fae169e854"
Jan 30 17:20:27 crc kubenswrapper[4875]: E0130 17:20:27.874282 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d15b21659eba410850e68c1e5fa184cbe74290ca5587b3495c17d1fae169e854\": container with ID starting with d15b21659eba410850e68c1e5fa184cbe74290ca5587b3495c17d1fae169e854 not found: ID does not exist" containerID="d15b21659eba410850e68c1e5fa184cbe74290ca5587b3495c17d1fae169e854"
Jan 30 17:20:27 crc kubenswrapper[4875]: I0130 17:20:27.874315 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d15b21659eba410850e68c1e5fa184cbe74290ca5587b3495c17d1fae169e854"} err="failed to get container status \"d15b21659eba410850e68c1e5fa184cbe74290ca5587b3495c17d1fae169e854\": rpc error: code = NotFound desc = could not find container \"d15b21659eba410850e68c1e5fa184cbe74290ca5587b3495c17d1fae169e854\": container with ID starting with d15b21659eba410850e68c1e5fa184cbe74290ca5587b3495c17d1fae169e854 not found: ID does not exist"
Jan 30 17:20:28 crc kubenswrapper[4875]: I0130 17:20:28.147929 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c317cdb1-d0ec-43c5-bd8c-49bef15233f3" path="/var/lib/kubelet/pods/c317cdb1-d0ec-43c5-bd8c-49bef15233f3/volumes"
Jan 30 17:20:31 crc kubenswrapper[4875]: I0130 17:20:31.199956 4875 scope.go:117] "RemoveContainer" containerID="2140764bd6054cb6399ae69a139a9ef2a5dd0b2a740d55547c6e03c90f224931"
Jan 30 17:20:56 crc kubenswrapper[4875]: I0130 17:20:56.287640 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:21:26 crc kubenswrapper[4875]: I0130 17:21:26.288205 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:21:26 crc kubenswrapper[4875]: I0130 17:21:26.289247 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:21:26 crc kubenswrapper[4875]: I0130 17:21:26.289299 4875 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 17:21:26 crc kubenswrapper[4875]: I0130 17:21:26.290443 4875 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78"} pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:21:26 crc kubenswrapper[4875]: I0130 17:21:26.290502 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" containerID="cri-o://229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" gracePeriod=600 Jan 30 17:21:26 crc kubenswrapper[4875]: E0130 17:21:26.420915 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:21:27 crc kubenswrapper[4875]: I0130 17:21:27.293480 4875 generic.go:334] "Generic (PLEG): container finished" podID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" exitCode=0 Jan 30 17:21:27 crc kubenswrapper[4875]: I0130 17:21:27.293529 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerDied","Data":"229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78"} Jan 30 17:21:27 crc kubenswrapper[4875]: I0130 17:21:27.293563 4875 scope.go:117] "RemoveContainer" containerID="48e3a087955728186281898d070efcfe8a3f5df09e6720b6da52c18157fc11ce" Jan 30 17:21:27 crc kubenswrapper[4875]: I0130 17:21:27.294704 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:21:27 crc kubenswrapper[4875]: E0130 17:21:27.295132 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Jan 30 17:21:27 crc kubenswrapper[4875]: E0130 17:21:27.295132 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8"
Jan 30 17:21:31 crc kubenswrapper[4875]: I0130 17:21:31.262700 4875 scope.go:117] "RemoveContainer" containerID="94167c01229aeec8cf619d60e4aacab5d84e8f547b093595aaccda02f1d69fd0"
Jan 30 17:21:31 crc kubenswrapper[4875]: I0130 17:21:31.303729 4875 scope.go:117] "RemoveContainer" containerID="2ab78dd05c9c2b5ed5d5660300596887791c39e1464e42050bc08d8db0d931ad"
Jan 30 17:21:39 crc kubenswrapper[4875]: I0130 17:21:39.136560 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78"
Jan 30 17:21:39 crc kubenswrapper[4875]: E0130 17:21:39.137299 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8"
Jan 30 17:21:51 crc kubenswrapper[4875]: I0130 17:21:51.065456 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-db-create-sljmk"]
Jan 30 17:21:51 crc kubenswrapper[4875]: I0130 17:21:51.078926 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-db-create-57627"]
Jan 30 17:21:51 crc kubenswrapper[4875]: I0130 17:21:51.091216 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-ec5d-account-create-update-m7bv7"]
Jan 30 17:21:51 crc kubenswrapper[4875]: I0130 17:21:51.098862 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-create-sljmk"]
Jan 30 17:21:51 crc kubenswrapper[4875]: I0130 17:21:51.106576 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-create-57627"]
Jan 30 17:21:51 crc kubenswrapper[4875]: I0130 17:21:51.113726 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-ec5d-account-create-update-m7bv7"]
Jan 30 17:21:51 crc kubenswrapper[4875]: I0130 17:21:51.136258 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78"
Jan 30 17:21:51 crc kubenswrapper[4875]: E0130 17:21:51.136659 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8"
Jan 30 17:21:52 crc kubenswrapper[4875]: I0130 17:21:52.029775 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-3844-account-create-update-hgdr6"]
Jan 30 17:21:52 crc kubenswrapper[4875]: I0130 17:21:52.035898 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-3844-account-create-update-hgdr6"]
path="/var/lib/kubelet/pods/56e2e23d-0edb-4d09-b421-5bb12f185bdd/volumes" Jan 30 17:21:52 crc kubenswrapper[4875]: I0130 17:21:52.147416 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c19db90e-7888-492f-81aa-3109c80be25b" path="/var/lib/kubelet/pods/c19db90e-7888-492f-81aa-3109c80be25b/volumes" Jan 30 17:21:52 crc kubenswrapper[4875]: I0130 17:21:52.148074 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cacfe404-1454-466e-8036-b66d7b76ea37" path="/var/lib/kubelet/pods/cacfe404-1454-466e-8036-b66d7b76ea37/volumes" Jan 30 17:21:52 crc kubenswrapper[4875]: I0130 17:21:52.148737 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccb0c94f-c080-475d-b3e3-5c48b99f7c1f" path="/var/lib/kubelet/pods/ccb0c94f-c080-475d-b3e3-5c48b99f7c1f/volumes" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.489573 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jgw9d"] Jan 30 17:22:00 crc kubenswrapper[4875]: E0130 17:22:00.490624 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76fe2519-334c-4a37-b267-a0aeb9005095" containerName="extract-utilities" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.490647 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fe2519-334c-4a37-b267-a0aeb9005095" containerName="extract-utilities" Jan 30 17:22:00 crc kubenswrapper[4875]: E0130 17:22:00.490664 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c317cdb1-d0ec-43c5-bd8c-49bef15233f3" containerName="extract-utilities" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.490676 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="c317cdb1-d0ec-43c5-bd8c-49bef15233f3" containerName="extract-utilities" Jan 30 17:22:00 crc kubenswrapper[4875]: E0130 17:22:00.490689 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c317cdb1-d0ec-43c5-bd8c-49bef15233f3" containerName="extract-content" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.490700 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="c317cdb1-d0ec-43c5-bd8c-49bef15233f3" containerName="extract-content" Jan 30 17:22:00 crc kubenswrapper[4875]: E0130 17:22:00.490728 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c317cdb1-d0ec-43c5-bd8c-49bef15233f3" containerName="registry-server" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.490739 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="c317cdb1-d0ec-43c5-bd8c-49bef15233f3" containerName="registry-server" Jan 30 17:22:00 crc kubenswrapper[4875]: E0130 17:22:00.490765 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76fe2519-334c-4a37-b267-a0aeb9005095" containerName="registry-server" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.490776 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fe2519-334c-4a37-b267-a0aeb9005095" containerName="registry-server" Jan 30 17:22:00 crc kubenswrapper[4875]: E0130 17:22:00.490801 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76fe2519-334c-4a37-b267-a0aeb9005095" containerName="extract-content" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.490812 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fe2519-334c-4a37-b267-a0aeb9005095" containerName="extract-content" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.491033 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="c317cdb1-d0ec-43c5-bd8c-49bef15233f3" 
containerName="registry-server" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.491054 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="76fe2519-334c-4a37-b267-a0aeb9005095" containerName="registry-server" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.496878 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jgw9d" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.511269 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jgw9d"] Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.590977 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2247927f-781b-4017-87f0-90143313e690-catalog-content\") pod \"certified-operators-jgw9d\" (UID: \"2247927f-781b-4017-87f0-90143313e690\") " pod="openshift-marketplace/certified-operators-jgw9d" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.591017 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4z6c\" (UniqueName: \"kubernetes.io/projected/2247927f-781b-4017-87f0-90143313e690-kube-api-access-v4z6c\") pod \"certified-operators-jgw9d\" (UID: \"2247927f-781b-4017-87f0-90143313e690\") " pod="openshift-marketplace/certified-operators-jgw9d" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.591183 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2247927f-781b-4017-87f0-90143313e690-utilities\") pod \"certified-operators-jgw9d\" (UID: \"2247927f-781b-4017-87f0-90143313e690\") " pod="openshift-marketplace/certified-operators-jgw9d" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.692635 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2247927f-781b-4017-87f0-90143313e690-utilities\") pod \"certified-operators-jgw9d\" (UID: \"2247927f-781b-4017-87f0-90143313e690\") " pod="openshift-marketplace/certified-operators-jgw9d" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.692761 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2247927f-781b-4017-87f0-90143313e690-catalog-content\") pod \"certified-operators-jgw9d\" (UID: \"2247927f-781b-4017-87f0-90143313e690\") " pod="openshift-marketplace/certified-operators-jgw9d" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.692785 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4z6c\" (UniqueName: \"kubernetes.io/projected/2247927f-781b-4017-87f0-90143313e690-kube-api-access-v4z6c\") pod \"certified-operators-jgw9d\" (UID: \"2247927f-781b-4017-87f0-90143313e690\") " pod="openshift-marketplace/certified-operators-jgw9d" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.693436 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2247927f-781b-4017-87f0-90143313e690-utilities\") pod \"certified-operators-jgw9d\" (UID: \"2247927f-781b-4017-87f0-90143313e690\") " pod="openshift-marketplace/certified-operators-jgw9d" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.693460 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2247927f-781b-4017-87f0-90143313e690-catalog-content\") pod \"certified-operators-jgw9d\" (UID: \"2247927f-781b-4017-87f0-90143313e690\") " pod="openshift-marketplace/certified-operators-jgw9d" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.715426 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4z6c\" (UniqueName: \"kubernetes.io/projected/2247927f-781b-4017-87f0-90143313e690-kube-api-access-v4z6c\") pod \"certified-operators-jgw9d\" (UID: \"2247927f-781b-4017-87f0-90143313e690\") " pod="openshift-marketplace/certified-operators-jgw9d" Jan 30 17:22:00 crc kubenswrapper[4875]: I0130 17:22:00.830718 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jgw9d" Jan 30 17:22:01 crc kubenswrapper[4875]: I0130 17:22:01.316270 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jgw9d"] Jan 30 17:22:01 crc kubenswrapper[4875]: I0130 17:22:01.591320 4875 generic.go:334] "Generic (PLEG): container finished" podID="2247927f-781b-4017-87f0-90143313e690" containerID="562e18faaf03857d9bd7b98e5168d993750a1da6956d395d0d0f7f5bb0afaeb3" exitCode=0 Jan 30 17:22:01 crc kubenswrapper[4875]: I0130 17:22:01.591374 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgw9d" event={"ID":"2247927f-781b-4017-87f0-90143313e690","Type":"ContainerDied","Data":"562e18faaf03857d9bd7b98e5168d993750a1da6956d395d0d0f7f5bb0afaeb3"} Jan 30 17:22:01 crc kubenswrapper[4875]: I0130 17:22:01.591411 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgw9d" event={"ID":"2247927f-781b-4017-87f0-90143313e690","Type":"ContainerStarted","Data":"a8d49a8752ee8d5bcc66c6d66173a4b615a64bc631d4190cd47de5536e051ada"} Jan 30 17:22:01 crc kubenswrapper[4875]: I0130 17:22:01.594261 4875 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:22:02 crc kubenswrapper[4875]: I0130 17:22:02.030021 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-d79kf"] Jan 30 17:22:02 crc kubenswrapper[4875]: I0130 17:22:02.036528 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-d79kf"] Jan 30 17:22:02 crc kubenswrapper[4875]: I0130 17:22:02.136349 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:22:02 crc kubenswrapper[4875]: E0130 17:22:02.137109 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:22:02 crc kubenswrapper[4875]: I0130 17:22:02.155919 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a569857-e743-4fec-8bc5-63bdec8c8b0c" path="/var/lib/kubelet/pods/9a569857-e743-4fec-8bc5-63bdec8c8b0c/volumes" Jan 30 17:22:05 crc kubenswrapper[4875]: I0130 17:22:05.625771 4875 generic.go:334] "Generic (PLEG): container finished" podID="2247927f-781b-4017-87f0-90143313e690" 
containerID="d0ab8fda38d75abe9530fa753acff3b2547e0c3151a25387382e6b43e53ac887" exitCode=0 Jan 30 17:22:05 crc kubenswrapper[4875]: I0130 17:22:05.626305 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgw9d" event={"ID":"2247927f-781b-4017-87f0-90143313e690","Type":"ContainerDied","Data":"d0ab8fda38d75abe9530fa753acff3b2547e0c3151a25387382e6b43e53ac887"} Jan 30 17:22:06 crc kubenswrapper[4875]: I0130 17:22:06.637260 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgw9d" event={"ID":"2247927f-781b-4017-87f0-90143313e690","Type":"ContainerStarted","Data":"2d7ed2278a0affd3a07751578992b5a2809cc6be543ae36caf7d6b17030264d2"} Jan 30 17:22:06 crc kubenswrapper[4875]: I0130 17:22:06.662632 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jgw9d" podStartSLOduration=2.254650743 podStartE2EDuration="6.66258713s" podCreationTimestamp="2026-01-30 17:22:00 +0000 UTC" firstStartedPulling="2026-01-30 17:22:01.59382258 +0000 UTC m=+1532.141185993" lastFinishedPulling="2026-01-30 17:22:06.001758997 +0000 UTC m=+1536.549122380" observedRunningTime="2026-01-30 17:22:06.654472908 +0000 UTC m=+1537.201836301" watchObservedRunningTime="2026-01-30 17:22:06.66258713 +0000 UTC m=+1537.209973324" Jan 30 17:22:10 crc kubenswrapper[4875]: I0130 17:22:10.830957 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jgw9d" Jan 30 17:22:10 crc kubenswrapper[4875]: I0130 17:22:10.831540 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jgw9d" Jan 30 17:22:10 crc kubenswrapper[4875]: I0130 17:22:10.876271 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jgw9d" Jan 30 17:22:11 crc kubenswrapper[4875]: I0130 17:22:11.721993 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jgw9d" Jan 30 17:22:11 crc kubenswrapper[4875]: I0130 17:22:11.797775 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jgw9d"] Jan 30 17:22:11 crc kubenswrapper[4875]: I0130 17:22:11.848540 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9gm2r"] Jan 30 17:22:11 crc kubenswrapper[4875]: I0130 17:22:11.848937 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9gm2r" podUID="19625989-de41-4994-b07f-6d0880ba073c" containerName="registry-server" containerID="cri-o://c9c675481037d4c6108a084ba81e8feef9beca19ef426dee1ff8b8a74aa8b7d1" gracePeriod=2 Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.286888 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9gm2r" Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.482297 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19625989-de41-4994-b07f-6d0880ba073c-utilities\") pod \"19625989-de41-4994-b07f-6d0880ba073c\" (UID: \"19625989-de41-4994-b07f-6d0880ba073c\") " Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.482337 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19625989-de41-4994-b07f-6d0880ba073c-catalog-content\") pod \"19625989-de41-4994-b07f-6d0880ba073c\" (UID: \"19625989-de41-4994-b07f-6d0880ba073c\") " Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.482414 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvppr\" (UniqueName: \"kubernetes.io/projected/19625989-de41-4994-b07f-6d0880ba073c-kube-api-access-wvppr\") pod \"19625989-de41-4994-b07f-6d0880ba073c\" (UID: \"19625989-de41-4994-b07f-6d0880ba073c\") " Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.484071 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19625989-de41-4994-b07f-6d0880ba073c-utilities" (OuterVolumeSpecName: "utilities") pod "19625989-de41-4994-b07f-6d0880ba073c" (UID: "19625989-de41-4994-b07f-6d0880ba073c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.489615 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19625989-de41-4994-b07f-6d0880ba073c-kube-api-access-wvppr" (OuterVolumeSpecName: "kube-api-access-wvppr") pod "19625989-de41-4994-b07f-6d0880ba073c" (UID: "19625989-de41-4994-b07f-6d0880ba073c"). InnerVolumeSpecName "kube-api-access-wvppr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.526518 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19625989-de41-4994-b07f-6d0880ba073c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19625989-de41-4994-b07f-6d0880ba073c" (UID: "19625989-de41-4994-b07f-6d0880ba073c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.584404 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19625989-de41-4994-b07f-6d0880ba073c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.584684 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19625989-de41-4994-b07f-6d0880ba073c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.584755 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvppr\" (UniqueName: \"kubernetes.io/projected/19625989-de41-4994-b07f-6d0880ba073c-kube-api-access-wvppr\") on node \"crc\" DevicePath \"\"" Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.691291 4875 generic.go:334] "Generic (PLEG): container finished" podID="19625989-de41-4994-b07f-6d0880ba073c" containerID="c9c675481037d4c6108a084ba81e8feef9beca19ef426dee1ff8b8a74aa8b7d1" exitCode=0 Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.691346 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9gm2r" Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.691385 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gm2r" event={"ID":"19625989-de41-4994-b07f-6d0880ba073c","Type":"ContainerDied","Data":"c9c675481037d4c6108a084ba81e8feef9beca19ef426dee1ff8b8a74aa8b7d1"} Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.691434 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gm2r" event={"ID":"19625989-de41-4994-b07f-6d0880ba073c","Type":"ContainerDied","Data":"ff9a8cf15ff143200ad5e44255e7e814bfa51242a939dade36ad1865b2af2057"} Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.691455 4875 scope.go:117] "RemoveContainer" containerID="c9c675481037d4c6108a084ba81e8feef9beca19ef426dee1ff8b8a74aa8b7d1" Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.710490 4875 scope.go:117] "RemoveContainer" containerID="775524c2e3772f7304e280518e3da374e4b0466cb54ea08bd64d74338a19277e" Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.725745 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9gm2r"] Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.733131 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9gm2r"] Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.737535 4875 scope.go:117] "RemoveContainer" containerID="78f427ed407f8acbdfb71f0beab73d867595b2891be18d41f3db899209b23ab5" Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.766983 4875 scope.go:117] "RemoveContainer" containerID="c9c675481037d4c6108a084ba81e8feef9beca19ef426dee1ff8b8a74aa8b7d1" Jan 30 17:22:12 crc kubenswrapper[4875]: E0130 17:22:12.767688 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9c675481037d4c6108a084ba81e8feef9beca19ef426dee1ff8b8a74aa8b7d1\": container with ID starting with c9c675481037d4c6108a084ba81e8feef9beca19ef426dee1ff8b8a74aa8b7d1 not found: ID does not exist" containerID="c9c675481037d4c6108a084ba81e8feef9beca19ef426dee1ff8b8a74aa8b7d1" Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.767936 
4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9c675481037d4c6108a084ba81e8feef9beca19ef426dee1ff8b8a74aa8b7d1"} err="failed to get container status \"c9c675481037d4c6108a084ba81e8feef9beca19ef426dee1ff8b8a74aa8b7d1\": rpc error: code = NotFound desc = could not find container \"c9c675481037d4c6108a084ba81e8feef9beca19ef426dee1ff8b8a74aa8b7d1\": container with ID starting with c9c675481037d4c6108a084ba81e8feef9beca19ef426dee1ff8b8a74aa8b7d1 not found: ID does not exist" Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.767965 4875 scope.go:117] "RemoveContainer" containerID="775524c2e3772f7304e280518e3da374e4b0466cb54ea08bd64d74338a19277e" Jan 30 17:22:12 crc kubenswrapper[4875]: E0130 17:22:12.768346 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"775524c2e3772f7304e280518e3da374e4b0466cb54ea08bd64d74338a19277e\": container with ID starting with 775524c2e3772f7304e280518e3da374e4b0466cb54ea08bd64d74338a19277e not found: ID does not exist" containerID="775524c2e3772f7304e280518e3da374e4b0466cb54ea08bd64d74338a19277e" Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.768383 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"775524c2e3772f7304e280518e3da374e4b0466cb54ea08bd64d74338a19277e"} err="failed to get container status \"775524c2e3772f7304e280518e3da374e4b0466cb54ea08bd64d74338a19277e\": rpc error: code = NotFound desc = could not find container \"775524c2e3772f7304e280518e3da374e4b0466cb54ea08bd64d74338a19277e\": container with ID starting with 775524c2e3772f7304e280518e3da374e4b0466cb54ea08bd64d74338a19277e not found: ID does not exist" Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.768405 4875 scope.go:117] "RemoveContainer" containerID="78f427ed407f8acbdfb71f0beab73d867595b2891be18d41f3db899209b23ab5" Jan 30 17:22:12 crc kubenswrapper[4875]: E0130 17:22:12.768708 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78f427ed407f8acbdfb71f0beab73d867595b2891be18d41f3db899209b23ab5\": container with ID starting with 78f427ed407f8acbdfb71f0beab73d867595b2891be18d41f3db899209b23ab5 not found: ID does not exist" containerID="78f427ed407f8acbdfb71f0beab73d867595b2891be18d41f3db899209b23ab5" Jan 30 17:22:12 crc kubenswrapper[4875]: I0130 17:22:12.768905 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78f427ed407f8acbdfb71f0beab73d867595b2891be18d41f3db899209b23ab5"} err="failed to get container status \"78f427ed407f8acbdfb71f0beab73d867595b2891be18d41f3db899209b23ab5\": rpc error: code = NotFound desc = could not find container \"78f427ed407f8acbdfb71f0beab73d867595b2891be18d41f3db899209b23ab5\": container with ID starting with 78f427ed407f8acbdfb71f0beab73d867595b2891be18d41f3db899209b23ab5 not found: ID does not exist" Jan 30 17:22:14 crc kubenswrapper[4875]: I0130 17:22:14.146288 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19625989-de41-4994-b07f-6d0880ba073c" path="/var/lib/kubelet/pods/19625989-de41-4994-b07f-6d0880ba073c/volumes" Jan 30 17:22:15 crc kubenswrapper[4875]: I0130 17:22:15.136469 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:22:15 crc kubenswrapper[4875]: E0130 17:22:15.137384 4875 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:22:26 crc kubenswrapper[4875]: I0130 17:22:26.138053 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:22:26 crc kubenswrapper[4875]: E0130 17:22:26.139125 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:22:31 crc kubenswrapper[4875]: I0130 17:22:31.391745 4875 scope.go:117] "RemoveContainer" containerID="cfb3403bee90a75d4b11236e55da52f19abc361193aff4f5c329e9c54dca4e13" Jan 30 17:22:31 crc kubenswrapper[4875]: I0130 17:22:31.434511 4875 scope.go:117] "RemoveContainer" containerID="0022c23ed2e2145d08ef28bf4670ca3497acdab4af3dff7c0d3899d5847337ad" Jan 30 17:22:31 crc kubenswrapper[4875]: I0130 17:22:31.494136 4875 scope.go:117] "RemoveContainer" containerID="7ccaeca0120987ce77a158bccc5c4d82c8df6516bc785d75df115c81e3a67fa6" Jan 30 17:22:31 crc kubenswrapper[4875]: I0130 17:22:31.533392 4875 scope.go:117] "RemoveContainer" containerID="6612ce0639e40a40e560924e4d907833dffd286927eddf189ee1f195411aef45" Jan 30 17:22:31 crc kubenswrapper[4875]: I0130 17:22:31.555908 4875 scope.go:117] "RemoveContainer" containerID="f42eb5c44f3af398b19cebfcaa54889d1c35331fafe5e8419b0a5ace7c57a44e" Jan 30 17:22:31 crc kubenswrapper[4875]: I0130 17:22:31.598518 4875 scope.go:117] "RemoveContainer" containerID="010a44139337623bf89ddd7ce1765ba779ece6332e73e2b5def9f6ad4ad53fe7" Jan 30 17:22:32 crc kubenswrapper[4875]: I0130 17:22:32.042008 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-db-sync-xcxvd"] Jan 30 17:22:32 crc kubenswrapper[4875]: I0130 17:22:32.047629 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-sync-xcxvd"] Jan 30 17:22:32 crc kubenswrapper[4875]: I0130 17:22:32.146896 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2064320-5eaf-4bef-af21-eb2812fcbd4a" path="/var/lib/kubelet/pods/b2064320-5eaf-4bef-af21-eb2812fcbd4a/volumes" Jan 30 17:22:37 crc kubenswrapper[4875]: I0130 17:22:37.137193 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:22:37 crc kubenswrapper[4875]: E0130 17:22:37.138191 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:22:41 crc kubenswrapper[4875]: I0130 17:22:41.056422 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-db-sync-jc9wl"] Jan 30 
17:22:41 crc kubenswrapper[4875]: I0130 17:22:41.064800 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-sync-jc9wl"] Jan 30 17:22:42 crc kubenswrapper[4875]: I0130 17:22:42.149019 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a08451c-5704-47a5-ae37-83a7f01bc502" path="/var/lib/kubelet/pods/2a08451c-5704-47a5-ae37-83a7f01bc502/volumes" Jan 30 17:22:48 crc kubenswrapper[4875]: I0130 17:22:48.027714 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-gpjtc"] Jan 30 17:22:48 crc kubenswrapper[4875]: I0130 17:22:48.034336 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-gpjtc"] Jan 30 17:22:48 crc kubenswrapper[4875]: I0130 17:22:48.145240 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b333d616-9e20-4fcf-8c85-f3c90a6bee75" path="/var/lib/kubelet/pods/b333d616-9e20-4fcf-8c85-f3c90a6bee75/volumes" Jan 30 17:22:51 crc kubenswrapper[4875]: I0130 17:22:51.136343 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:22:51 crc kubenswrapper[4875]: E0130 17:22:51.136669 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:23:05 crc kubenswrapper[4875]: I0130 17:23:05.136010 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:23:05 crc kubenswrapper[4875]: E0130 17:23:05.136668 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:23:16 crc kubenswrapper[4875]: I0130 17:23:16.137400 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:23:16 crc kubenswrapper[4875]: E0130 17:23:16.138895 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:23:28 crc kubenswrapper[4875]: I0130 17:23:28.136495 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:23:28 crc kubenswrapper[4875]: E0130 17:23:28.137259 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:23:31 crc kubenswrapper[4875]: I0130 17:23:31.723735 4875 scope.go:117] "RemoveContainer" containerID="3fa5d8c7f70a96347025ba4932a4eb1ab36d6f79b84230dde8c14dfee264212d" Jan 30 17:23:31 crc kubenswrapper[4875]: I0130 17:23:31.766193 4875 scope.go:117] "RemoveContainer" containerID="2828ba94a8a28e3012090b9be4de56dd7a71fcb4ff38037b8226c0244fd4d980" Jan 30 17:23:31 crc kubenswrapper[4875]: I0130 17:23:31.833891 4875 scope.go:117] "RemoveContainer" containerID="65d11b87214bbcdb19a11096312e897c3c2e56a97a48d9b483c34612eb719162" Jan 30 17:23:31 crc kubenswrapper[4875]: I0130 17:23:31.884535 4875 scope.go:117] "RemoveContainer" containerID="007475f3dbfa71584438843858509f0e6fbd8d04a4fecbca9d37b3b82b4eca40" Jan 30 17:23:39 crc kubenswrapper[4875]: I0130 17:23:39.136020 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:23:39 crc kubenswrapper[4875]: E0130 17:23:39.136644 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:23:52 crc kubenswrapper[4875]: I0130 17:23:52.135804 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:23:52 crc kubenswrapper[4875]: E0130 17:23:52.136463 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:24:05 crc kubenswrapper[4875]: I0130 17:24:05.136512 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:24:05 crc kubenswrapper[4875]: E0130 17:24:05.137499 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:24:17 crc kubenswrapper[4875]: I0130 17:24:17.136139 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:24:17 crc kubenswrapper[4875]: E0130 17:24:17.137298 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:24:32 crc kubenswrapper[4875]: I0130 17:24:32.135850 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:24:32 crc kubenswrapper[4875]: E0130 17:24:32.136562 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:24:45 crc kubenswrapper[4875]: I0130 17:24:45.658857 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:24:45 crc kubenswrapper[4875]: E0130 17:24:45.665822 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:25:01 crc kubenswrapper[4875]: I0130 17:25:01.136695 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:25:01 crc kubenswrapper[4875]: E0130 17:25:01.137520 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:25:14 crc kubenswrapper[4875]: I0130 17:25:14.136089 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:25:14 crc kubenswrapper[4875]: E0130 17:25:14.136849 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:25:25 crc kubenswrapper[4875]: I0130 17:25:25.135494 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:25:25 crc kubenswrapper[4875]: E0130 17:25:25.136250 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:25:27 crc kubenswrapper[4875]: I0130 17:25:27.040914 4875 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr"] Jan 30 17:25:27 crc kubenswrapper[4875]: I0130 17:25:27.047613 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-z4tpp"] Jan 30 17:25:27 crc kubenswrapper[4875]: I0130 17:25:27.055854 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k"] Jan 30 17:25:27 crc kubenswrapper[4875]: I0130 17:25:27.063337 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-bf0b-account-create-update-p9bpr"] Jan 30 17:25:27 crc kubenswrapper[4875]: I0130 17:25:27.069488 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-dd3c-account-create-update-fpg7k"] Jan 30 17:25:27 crc kubenswrapper[4875]: I0130 17:25:27.074731 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-jkxb9"] Jan 30 17:25:27 crc kubenswrapper[4875]: I0130 17:25:27.079801 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-z4tpp"] Jan 30 17:25:27 crc kubenswrapper[4875]: I0130 17:25:27.085631 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-jkxb9"] Jan 30 17:25:28 crc kubenswrapper[4875]: I0130 17:25:28.145167 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="346898dc-db0f-4f45-aa32-d4234d759042" path="/var/lib/kubelet/pods/346898dc-db0f-4f45-aa32-d4234d759042/volumes" Jan 30 17:25:28 crc kubenswrapper[4875]: I0130 17:25:28.146012 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f3d7a5e-cb17-44f8-9898-c41e0cff56bf" path="/var/lib/kubelet/pods/5f3d7a5e-cb17-44f8-9898-c41e0cff56bf/volumes" Jan 30 17:25:28 crc kubenswrapper[4875]: I0130 17:25:28.146616 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e1d2c4-624d-42d8-93fc-d203ec6a9c0f" path="/var/lib/kubelet/pods/84e1d2c4-624d-42d8-93fc-d203ec6a9c0f/volumes" Jan 30 17:25:28 crc kubenswrapper[4875]: I0130 17:25:28.147104 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb305e99-aa29-41e8-97de-f49f2fdd8e7b" path="/var/lib/kubelet/pods/bb305e99-aa29-41e8-97de-f49f2fdd8e7b/volumes" Jan 30 17:25:31 crc kubenswrapper[4875]: I0130 17:25:31.987933 4875 scope.go:117] "RemoveContainer" containerID="06e4849a25106592fdd88b8f37251f2cd1240f332fe06fb5eee071de7b904aea" Jan 30 17:25:32 crc kubenswrapper[4875]: I0130 17:25:32.019375 4875 scope.go:117] "RemoveContainer" containerID="18e6d6fcc136cd1be771cc4f120dd5e70dcc57b0a3cdab20e3d1dc635d89f80f" Jan 30 17:25:32 crc kubenswrapper[4875]: I0130 17:25:32.052553 4875 scope.go:117] "RemoveContainer" containerID="94eff01f095b89372f2d9f2896f0d252cbc05ca211c197972fd090c2b6bae45c" Jan 30 17:25:32 crc kubenswrapper[4875]: I0130 17:25:32.096045 4875 scope.go:117] "RemoveContainer" containerID="fa165cdef2cb82c68a99afbe4896b77d0c32fde2b5b72a6252d631b0c9c1cd70" Jan 30 17:25:40 crc kubenswrapper[4875]: I0130 17:25:40.140481 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:25:40 crc kubenswrapper[4875]: E0130 17:25:40.141279 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:25:48 crc kubenswrapper[4875]: I0130 17:25:48.056027 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj"] Jan 30 17:25:48 crc kubenswrapper[4875]: I0130 17:25:48.066564 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hjfhj"] Jan 30 17:25:48 crc kubenswrapper[4875]: I0130 17:25:48.149055 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6aa5cab-8934-4528-ab4b-0e2e08cb67b0" path="/var/lib/kubelet/pods/f6aa5cab-8934-4528-ab4b-0e2e08cb67b0/volumes" Jan 30 17:25:51 crc kubenswrapper[4875]: I0130 17:25:51.135652 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:25:51 crc kubenswrapper[4875]: E0130 17:25:51.136699 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:26:05 crc kubenswrapper[4875]: I0130 17:26:05.136934 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:26:05 crc kubenswrapper[4875]: E0130 17:26:05.137865 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:26:06 crc kubenswrapper[4875]: I0130 17:26:06.035945 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd"] Jan 30 17:26:06 crc kubenswrapper[4875]: I0130 17:26:06.047383 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-rkspd"] Jan 30 17:26:06 crc kubenswrapper[4875]: I0130 17:26:06.146334 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91d5408a-71a2-48dd-bc00-17f3aa048238" path="/var/lib/kubelet/pods/91d5408a-71a2-48dd-bc00-17f3aa048238/volumes" Jan 30 17:26:18 crc kubenswrapper[4875]: I0130 17:26:18.136072 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:26:18 crc kubenswrapper[4875]: E0130 17:26:18.137094 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:26:25 crc kubenswrapper[4875]: I0130 17:26:25.032607 4875 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686"] Jan 30 17:26:25 crc kubenswrapper[4875]: I0130 17:26:25.038400 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-jm686"] Jan 30 17:26:26 crc kubenswrapper[4875]: I0130 17:26:26.148051 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4068c0c1-1588-401e-b1c2-597bfb06913a" path="/var/lib/kubelet/pods/4068c0c1-1588-401e-b1c2-597bfb06913a/volumes" Jan 30 17:26:31 crc kubenswrapper[4875]: I0130 17:26:31.135892 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:26:31 crc kubenswrapper[4875]: I0130 17:26:31.486775 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerStarted","Data":"8b766e41a157db7a703015b0504adf1f01b15a6ef061e2f64f148c69531ba279"} Jan 30 17:26:32 crc kubenswrapper[4875]: I0130 17:26:32.187970 4875 scope.go:117] "RemoveContainer" containerID="1ee145f8190b14013fee6b7110901c003ecc5b37c2438d9ccb09e3440982d394" Jan 30 17:26:32 crc kubenswrapper[4875]: I0130 17:26:32.237841 4875 scope.go:117] "RemoveContainer" containerID="f9d9d03c31bfca34e1b4bec070d4a6098da638e2f1003d06b11810b6207aa4ca" Jan 30 17:26:32 crc kubenswrapper[4875]: I0130 17:26:32.273253 4875 scope.go:117] "RemoveContainer" containerID="2f0250f16c14f44d852ea9668f4d0c6a912d5207ea16bf8ff8f6716a22f8de15" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.358102 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq"] Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.364935 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-lg6bq"] Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.474169 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.474390 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4391a03b-0c86-4610-a99f-0e4a1e1abce3" containerName="nova-kuttl-metadata-log" containerID="cri-o://bc7a92d89dd35a9af68f741d558bf205a2df49311806bd30ace880159130871b" gracePeriod=30 Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.474593 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4391a03b-0c86-4610-a99f-0e4a1e1abce3" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://42a65d8fb1d7828764831a43992659c1e3ad1479b245f7f9ba899d441394d899" gracePeriod=30 Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.499509 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.499765 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="266fb2db-b1d7-4a1d-8581-2ef284916384" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://039adc2b6b4d851dd4d207487ca5257b522af43ebc72c98b7c4f8db7c96ef7ba" gracePeriod=30 Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.511664 4875 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["nova-kuttl-default/novaapidd3c-account-delete-dd5b2"] Jan 30 17:26:55 crc kubenswrapper[4875]: E0130 17:26:55.512078 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19625989-de41-4994-b07f-6d0880ba073c" containerName="extract-content" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.512097 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="19625989-de41-4994-b07f-6d0880ba073c" containerName="extract-content" Jan 30 17:26:55 crc kubenswrapper[4875]: E0130 17:26:55.512118 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19625989-de41-4994-b07f-6d0880ba073c" containerName="extract-utilities" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.512125 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="19625989-de41-4994-b07f-6d0880ba073c" containerName="extract-utilities" Jan 30 17:26:55 crc kubenswrapper[4875]: E0130 17:26:55.512145 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19625989-de41-4994-b07f-6d0880ba073c" containerName="registry-server" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.512151 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="19625989-de41-4994-b07f-6d0880ba073c" containerName="registry-server" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.512333 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="19625989-de41-4994-b07f-6d0880ba073c" containerName="registry-server" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.513112 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapidd3c-account-delete-dd5b2" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.519756 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapidd3c-account-delete-dd5b2"] Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.532841 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell0bf0b-account-delete-fscnl"] Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.534003 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell0bf0b-account-delete-fscnl" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.548614 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell0bf0b-account-delete-fscnl"] Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.645199 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.645506 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="3ae65aa7-4fcd-4724-90ba-2a70bcf7472b" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://a66b72db7b2fdec520f275371d478d2c4ac23db968ce7e3511f943e6a03e2735" gracePeriod=30 Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.651556 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.651785 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="d6776e9b-c6c4-4b79-a16e-95c8d899bb94" containerName="nova-kuttl-api-log" containerID="cri-o://bce312694ce95c8b4e2417285a4914297fc2566503b63b3ccfef6a3d2112a1dc" gracePeriod=30 Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.651899 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="d6776e9b-c6c4-4b79-a16e-95c8d899bb94" containerName="nova-kuttl-api-api" containerID="cri-o://7d12b4edec8ba321d54cf7edc3d53cda4852cc1afa82bf3ff649751a28a48332" gracePeriod=30 Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.689326 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e137407-ca82-4025-947c-910890fb11a9-operator-scripts\") pod \"novaapidd3c-account-delete-dd5b2\" (UID: \"2e137407-ca82-4025-947c-910890fb11a9\") " pod="nova-kuttl-default/novaapidd3c-account-delete-dd5b2" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.689474 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a70c584f-d07b-4d52-8188-51b3d332e80e-operator-scripts\") pod \"novacell0bf0b-account-delete-fscnl\" (UID: \"a70c584f-d07b-4d52-8188-51b3d332e80e\") " pod="nova-kuttl-default/novacell0bf0b-account-delete-fscnl" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.689531 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxzfh\" (UniqueName: \"kubernetes.io/projected/2e137407-ca82-4025-947c-910890fb11a9-kube-api-access-jxzfh\") pod \"novaapidd3c-account-delete-dd5b2\" (UID: \"2e137407-ca82-4025-947c-910890fb11a9\") " pod="nova-kuttl-default/novaapidd3c-account-delete-dd5b2" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.689631 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjgm9\" (UniqueName: \"kubernetes.io/projected/a70c584f-d07b-4d52-8188-51b3d332e80e-kube-api-access-wjgm9\") pod \"novacell0bf0b-account-delete-fscnl\" (UID: \"a70c584f-d07b-4d52-8188-51b3d332e80e\") " pod="nova-kuttl-default/novacell0bf0b-account-delete-fscnl" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.690094 4875 generic.go:334] "Generic 
(PLEG): container finished" podID="4391a03b-0c86-4610-a99f-0e4a1e1abce3" containerID="bc7a92d89dd35a9af68f741d558bf205a2df49311806bd30ace880159130871b" exitCode=143 Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.690133 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4391a03b-0c86-4610-a99f-0e4a1e1abce3","Type":"ContainerDied","Data":"bc7a92d89dd35a9af68f741d558bf205a2df49311806bd30ace880159130871b"} Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.791080 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a70c584f-d07b-4d52-8188-51b3d332e80e-operator-scripts\") pod \"novacell0bf0b-account-delete-fscnl\" (UID: \"a70c584f-d07b-4d52-8188-51b3d332e80e\") " pod="nova-kuttl-default/novacell0bf0b-account-delete-fscnl" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.791153 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxzfh\" (UniqueName: \"kubernetes.io/projected/2e137407-ca82-4025-947c-910890fb11a9-kube-api-access-jxzfh\") pod \"novaapidd3c-account-delete-dd5b2\" (UID: \"2e137407-ca82-4025-947c-910890fb11a9\") " pod="nova-kuttl-default/novaapidd3c-account-delete-dd5b2" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.791209 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjgm9\" (UniqueName: \"kubernetes.io/projected/a70c584f-d07b-4d52-8188-51b3d332e80e-kube-api-access-wjgm9\") pod \"novacell0bf0b-account-delete-fscnl\" (UID: \"a70c584f-d07b-4d52-8188-51b3d332e80e\") " pod="nova-kuttl-default/novacell0bf0b-account-delete-fscnl" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.791273 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e137407-ca82-4025-947c-910890fb11a9-operator-scripts\") pod \"novaapidd3c-account-delete-dd5b2\" (UID: \"2e137407-ca82-4025-947c-910890fb11a9\") " pod="nova-kuttl-default/novaapidd3c-account-delete-dd5b2" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.792074 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e137407-ca82-4025-947c-910890fb11a9-operator-scripts\") pod \"novaapidd3c-account-delete-dd5b2\" (UID: \"2e137407-ca82-4025-947c-910890fb11a9\") " pod="nova-kuttl-default/novaapidd3c-account-delete-dd5b2" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.792086 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a70c584f-d07b-4d52-8188-51b3d332e80e-operator-scripts\") pod \"novacell0bf0b-account-delete-fscnl\" (UID: \"a70c584f-d07b-4d52-8188-51b3d332e80e\") " pod="nova-kuttl-default/novacell0bf0b-account-delete-fscnl" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.813526 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjgm9\" (UniqueName: \"kubernetes.io/projected/a70c584f-d07b-4d52-8188-51b3d332e80e-kube-api-access-wjgm9\") pod \"novacell0bf0b-account-delete-fscnl\" (UID: \"a70c584f-d07b-4d52-8188-51b3d332e80e\") " pod="nova-kuttl-default/novacell0bf0b-account-delete-fscnl" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.821669 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxzfh\" (UniqueName: 
\"kubernetes.io/projected/2e137407-ca82-4025-947c-910890fb11a9-kube-api-access-jxzfh\") pod \"novaapidd3c-account-delete-dd5b2\" (UID: \"2e137407-ca82-4025-947c-910890fb11a9\") " pod="nova-kuttl-default/novaapidd3c-account-delete-dd5b2" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.833992 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapidd3c-account-delete-dd5b2" Jan 30 17:26:55 crc kubenswrapper[4875]: I0130 17:26:55.859103 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0bf0b-account-delete-fscnl" Jan 30 17:26:56 crc kubenswrapper[4875]: I0130 17:26:56.145170 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44a7e857-e4b7-491a-b003-ca6a71e3bc08" path="/var/lib/kubelet/pods/44a7e857-e4b7-491a-b003-ca6a71e3bc08/volumes" Jan 30 17:26:56 crc kubenswrapper[4875]: W0130 17:26:56.348881 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e137407_ca82_4025_947c_910890fb11a9.slice/crio-7355217a13de368b80603e162d320a54e9617b9cbe3ca46bbb63866e5e0cf0a7 WatchSource:0}: Error finding container 7355217a13de368b80603e162d320a54e9617b9cbe3ca46bbb63866e5e0cf0a7: Status 404 returned error can't find the container with id 7355217a13de368b80603e162d320a54e9617b9cbe3ca46bbb63866e5e0cf0a7 Jan 30 17:26:56 crc kubenswrapper[4875]: I0130 17:26:56.350852 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapidd3c-account-delete-dd5b2"] Jan 30 17:26:56 crc kubenswrapper[4875]: W0130 17:26:56.356652 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda70c584f_d07b_4d52_8188_51b3d332e80e.slice/crio-74de48d15f309422ac12b997dc47e5f9cc538e00e3247f8b71669b9014df4829 WatchSource:0}: Error finding container 74de48d15f309422ac12b997dc47e5f9cc538e00e3247f8b71669b9014df4829: Status 404 returned error can't find the container with id 74de48d15f309422ac12b997dc47e5f9cc538e00e3247f8b71669b9014df4829 Jan 30 17:26:56 crc kubenswrapper[4875]: I0130 17:26:56.360630 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell0bf0b-account-delete-fscnl"] Jan 30 17:26:56 crc kubenswrapper[4875]: I0130 17:26:56.699034 4875 generic.go:334] "Generic (PLEG): container finished" podID="d6776e9b-c6c4-4b79-a16e-95c8d899bb94" containerID="bce312694ce95c8b4e2417285a4914297fc2566503b63b3ccfef6a3d2112a1dc" exitCode=143 Jan 30 17:26:56 crc kubenswrapper[4875]: I0130 17:26:56.699113 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d6776e9b-c6c4-4b79-a16e-95c8d899bb94","Type":"ContainerDied","Data":"bce312694ce95c8b4e2417285a4914297fc2566503b63b3ccfef6a3d2112a1dc"} Jan 30 17:26:56 crc kubenswrapper[4875]: I0130 17:26:56.701061 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0bf0b-account-delete-fscnl" event={"ID":"a70c584f-d07b-4d52-8188-51b3d332e80e","Type":"ContainerStarted","Data":"8ad7a84a9a0f8ecde34599f4cbadc73becf21c82ce295d936758120386a061bc"} Jan 30 17:26:56 crc kubenswrapper[4875]: I0130 17:26:56.701104 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0bf0b-account-delete-fscnl" event={"ID":"a70c584f-d07b-4d52-8188-51b3d332e80e","Type":"ContainerStarted","Data":"74de48d15f309422ac12b997dc47e5f9cc538e00e3247f8b71669b9014df4829"} Jan 30 
17:26:56 crc kubenswrapper[4875]: I0130 17:26:56.702136 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapidd3c-account-delete-dd5b2" event={"ID":"2e137407-ca82-4025-947c-910890fb11a9","Type":"ContainerStarted","Data":"d0810e56920bc76be7cd83273db37e38c94236b7920caf67465d7efb61e2d763"} Jan 30 17:26:56 crc kubenswrapper[4875]: I0130 17:26:56.702152 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapidd3c-account-delete-dd5b2" event={"ID":"2e137407-ca82-4025-947c-910890fb11a9","Type":"ContainerStarted","Data":"7355217a13de368b80603e162d320a54e9617b9cbe3ca46bbb63866e5e0cf0a7"} Jan 30 17:26:56 crc kubenswrapper[4875]: I0130 17:26:56.717197 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novacell0bf0b-account-delete-fscnl" podStartSLOduration=1.7171749859999998 podStartE2EDuration="1.717174986s" podCreationTimestamp="2026-01-30 17:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:26:56.713985749 +0000 UTC m=+1827.261349142" watchObservedRunningTime="2026-01-30 17:26:56.717174986 +0000 UTC m=+1827.264538369" Jan 30 17:26:57 crc kubenswrapper[4875]: I0130 17:26:57.711946 4875 generic.go:334] "Generic (PLEG): container finished" podID="a70c584f-d07b-4d52-8188-51b3d332e80e" containerID="8ad7a84a9a0f8ecde34599f4cbadc73becf21c82ce295d936758120386a061bc" exitCode=0 Jan 30 17:26:57 crc kubenswrapper[4875]: I0130 17:26:57.712046 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0bf0b-account-delete-fscnl" event={"ID":"a70c584f-d07b-4d52-8188-51b3d332e80e","Type":"ContainerDied","Data":"8ad7a84a9a0f8ecde34599f4cbadc73becf21c82ce295d936758120386a061bc"} Jan 30 17:26:57 crc kubenswrapper[4875]: I0130 17:26:57.715713 4875 generic.go:334] "Generic (PLEG): container finished" podID="2e137407-ca82-4025-947c-910890fb11a9" containerID="d0810e56920bc76be7cd83273db37e38c94236b7920caf67465d7efb61e2d763" exitCode=0 Jan 30 17:26:57 crc kubenswrapper[4875]: I0130 17:26:57.715772 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapidd3c-account-delete-dd5b2" event={"ID":"2e137407-ca82-4025-947c-910890fb11a9","Type":"ContainerDied","Data":"d0810e56920bc76be7cd83273db37e38c94236b7920caf67465d7efb61e2d763"} Jan 30 17:26:57 crc kubenswrapper[4875]: I0130 17:26:57.738039 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novaapidd3c-account-delete-dd5b2" podStartSLOduration=2.738010354 podStartE2EDuration="2.738010354s" podCreationTimestamp="2026-01-30 17:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:26:56.731779577 +0000 UTC m=+1827.279142960" watchObservedRunningTime="2026-01-30 17:26:57.738010354 +0000 UTC m=+1828.285373747" Jan 30 17:26:58 crc kubenswrapper[4875]: I0130 17:26:58.675976 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:26:58 crc kubenswrapper[4875]: I0130 17:26:58.728244 4875 generic.go:334] "Generic (PLEG): container finished" podID="3ae65aa7-4fcd-4724-90ba-2a70bcf7472b" containerID="a66b72db7b2fdec520f275371d478d2c4ac23db968ce7e3511f943e6a03e2735" exitCode=0 Jan 30 17:26:58 crc kubenswrapper[4875]: I0130 17:26:58.728429 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:26:58 crc kubenswrapper[4875]: I0130 17:26:58.728817 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"3ae65aa7-4fcd-4724-90ba-2a70bcf7472b","Type":"ContainerDied","Data":"a66b72db7b2fdec520f275371d478d2c4ac23db968ce7e3511f943e6a03e2735"} Jan 30 17:26:58 crc kubenswrapper[4875]: I0130 17:26:58.728910 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"3ae65aa7-4fcd-4724-90ba-2a70bcf7472b","Type":"ContainerDied","Data":"bc2889586f1bd9bcd13d2d535b9e3e13a9a16dfac45aae4ffa2fb4ea286441a5"} Jan 30 17:26:58 crc kubenswrapper[4875]: I0130 17:26:58.728936 4875 scope.go:117] "RemoveContainer" containerID="a66b72db7b2fdec520f275371d478d2c4ac23db968ce7e3511f943e6a03e2735" Jan 30 17:26:58 crc kubenswrapper[4875]: I0130 17:26:58.817958 4875 scope.go:117] "RemoveContainer" containerID="a66b72db7b2fdec520f275371d478d2c4ac23db968ce7e3511f943e6a03e2735" Jan 30 17:26:58 crc kubenswrapper[4875]: E0130 17:26:58.828013 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a66b72db7b2fdec520f275371d478d2c4ac23db968ce7e3511f943e6a03e2735\": container with ID starting with a66b72db7b2fdec520f275371d478d2c4ac23db968ce7e3511f943e6a03e2735 not found: ID does not exist" containerID="a66b72db7b2fdec520f275371d478d2c4ac23db968ce7e3511f943e6a03e2735" Jan 30 17:26:58 crc kubenswrapper[4875]: I0130 17:26:58.828055 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66b72db7b2fdec520f275371d478d2c4ac23db968ce7e3511f943e6a03e2735"} err="failed to get container status \"a66b72db7b2fdec520f275371d478d2c4ac23db968ce7e3511f943e6a03e2735\": rpc error: code = NotFound desc = could not find container \"a66b72db7b2fdec520f275371d478d2c4ac23db968ce7e3511f943e6a03e2735\": container with ID starting with a66b72db7b2fdec520f275371d478d2c4ac23db968ce7e3511f943e6a03e2735 not found: ID does not exist" Jan 30 17:26:58 crc kubenswrapper[4875]: I0130 17:26:58.836173 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae65aa7-4fcd-4724-90ba-2a70bcf7472b-config-data\") pod \"3ae65aa7-4fcd-4724-90ba-2a70bcf7472b\" (UID: \"3ae65aa7-4fcd-4724-90ba-2a70bcf7472b\") " Jan 30 17:26:58 crc kubenswrapper[4875]: I0130 17:26:58.836219 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8m2f\" (UniqueName: \"kubernetes.io/projected/3ae65aa7-4fcd-4724-90ba-2a70bcf7472b-kube-api-access-f8m2f\") pod \"3ae65aa7-4fcd-4724-90ba-2a70bcf7472b\" (UID: \"3ae65aa7-4fcd-4724-90ba-2a70bcf7472b\") " Jan 30 17:26:58 crc kubenswrapper[4875]: I0130 17:26:58.843315 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ae65aa7-4fcd-4724-90ba-2a70bcf7472b-kube-api-access-f8m2f" (OuterVolumeSpecName: 
"kube-api-access-f8m2f") pod "3ae65aa7-4fcd-4724-90ba-2a70bcf7472b" (UID: "3ae65aa7-4fcd-4724-90ba-2a70bcf7472b"). InnerVolumeSpecName "kube-api-access-f8m2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:26:58 crc kubenswrapper[4875]: I0130 17:26:58.867159 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae65aa7-4fcd-4724-90ba-2a70bcf7472b-config-data" (OuterVolumeSpecName: "config-data") pod "3ae65aa7-4fcd-4724-90ba-2a70bcf7472b" (UID: "3ae65aa7-4fcd-4724-90ba-2a70bcf7472b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:26:58 crc kubenswrapper[4875]: I0130 17:26:58.880053 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4391a03b-0c86-4610-a99f-0e4a1e1abce3" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.146:8775/\": read tcp 10.217.0.2:39210->10.217.0.146:8775: read: connection reset by peer" Jan 30 17:26:58 crc kubenswrapper[4875]: I0130 17:26:58.880087 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4391a03b-0c86-4610-a99f-0e4a1e1abce3" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.146:8775/\": read tcp 10.217.0.2:39198->10.217.0.146:8775: read: connection reset by peer" Jan 30 17:26:58 crc kubenswrapper[4875]: I0130 17:26:58.940676 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae65aa7-4fcd-4724-90ba-2a70bcf7472b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:58 crc kubenswrapper[4875]: I0130 17:26:58.940716 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8m2f\" (UniqueName: \"kubernetes.io/projected/3ae65aa7-4fcd-4724-90ba-2a70bcf7472b-kube-api-access-f8m2f\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.104992 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0bf0b-account-delete-fscnl" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.114645 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapidd3c-account-delete-dd5b2" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.118058 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.154493 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.245845 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e137407-ca82-4025-947c-910890fb11a9-operator-scripts\") pod \"2e137407-ca82-4025-947c-910890fb11a9\" (UID: \"2e137407-ca82-4025-947c-910890fb11a9\") " Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.245934 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a70c584f-d07b-4d52-8188-51b3d332e80e-operator-scripts\") pod \"a70c584f-d07b-4d52-8188-51b3d332e80e\" (UID: \"a70c584f-d07b-4d52-8188-51b3d332e80e\") " Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.245972 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxzfh\" (UniqueName: \"kubernetes.io/projected/2e137407-ca82-4025-947c-910890fb11a9-kube-api-access-jxzfh\") pod \"2e137407-ca82-4025-947c-910890fb11a9\" (UID: \"2e137407-ca82-4025-947c-910890fb11a9\") " Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.246004 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjgm9\" (UniqueName: \"kubernetes.io/projected/a70c584f-d07b-4d52-8188-51b3d332e80e-kube-api-access-wjgm9\") pod \"a70c584f-d07b-4d52-8188-51b3d332e80e\" (UID: \"a70c584f-d07b-4d52-8188-51b3d332e80e\") " Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.246816 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e137407-ca82-4025-947c-910890fb11a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e137407-ca82-4025-947c-910890fb11a9" (UID: "2e137407-ca82-4025-947c-910890fb11a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.246865 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a70c584f-d07b-4d52-8188-51b3d332e80e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a70c584f-d07b-4d52-8188-51b3d332e80e" (UID: "a70c584f-d07b-4d52-8188-51b3d332e80e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.252754 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a70c584f-d07b-4d52-8188-51b3d332e80e-kube-api-access-wjgm9" (OuterVolumeSpecName: "kube-api-access-wjgm9") pod "a70c584f-d07b-4d52-8188-51b3d332e80e" (UID: "a70c584f-d07b-4d52-8188-51b3d332e80e"). InnerVolumeSpecName "kube-api-access-wjgm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.263862 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e137407-ca82-4025-947c-910890fb11a9-kube-api-access-jxzfh" (OuterVolumeSpecName: "kube-api-access-jxzfh") pod "2e137407-ca82-4025-947c-910890fb11a9" (UID: "2e137407-ca82-4025-947c-910890fb11a9"). InnerVolumeSpecName "kube-api-access-jxzfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.304369 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.326228 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.347602 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4391a03b-0c86-4610-a99f-0e4a1e1abce3-logs\") pod \"4391a03b-0c86-4610-a99f-0e4a1e1abce3\" (UID: \"4391a03b-0c86-4610-a99f-0e4a1e1abce3\") " Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.347657 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5znb\" (UniqueName: \"kubernetes.io/projected/4391a03b-0c86-4610-a99f-0e4a1e1abce3-kube-api-access-x5znb\") pod \"4391a03b-0c86-4610-a99f-0e4a1e1abce3\" (UID: \"4391a03b-0c86-4610-a99f-0e4a1e1abce3\") " Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.347706 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266fb2db-b1d7-4a1d-8581-2ef284916384-config-data\") pod \"266fb2db-b1d7-4a1d-8581-2ef284916384\" (UID: \"266fb2db-b1d7-4a1d-8581-2ef284916384\") " Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.347726 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5xkk\" (UniqueName: \"kubernetes.io/projected/266fb2db-b1d7-4a1d-8581-2ef284916384-kube-api-access-p5xkk\") pod \"266fb2db-b1d7-4a1d-8581-2ef284916384\" (UID: \"266fb2db-b1d7-4a1d-8581-2ef284916384\") " Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.347765 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4391a03b-0c86-4610-a99f-0e4a1e1abce3-config-data\") pod \"4391a03b-0c86-4610-a99f-0e4a1e1abce3\" (UID: \"4391a03b-0c86-4610-a99f-0e4a1e1abce3\") " Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.347966 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a70c584f-d07b-4d52-8188-51b3d332e80e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.347983 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxzfh\" (UniqueName: \"kubernetes.io/projected/2e137407-ca82-4025-947c-910890fb11a9-kube-api-access-jxzfh\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.348080 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjgm9\" (UniqueName: \"kubernetes.io/projected/a70c584f-d07b-4d52-8188-51b3d332e80e-kube-api-access-wjgm9\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 
17:26:59.348095 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e137407-ca82-4025-947c-910890fb11a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.349242 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4391a03b-0c86-4610-a99f-0e4a1e1abce3-logs" (OuterVolumeSpecName: "logs") pod "4391a03b-0c86-4610-a99f-0e4a1e1abce3" (UID: "4391a03b-0c86-4610-a99f-0e4a1e1abce3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.352288 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4391a03b-0c86-4610-a99f-0e4a1e1abce3-kube-api-access-x5znb" (OuterVolumeSpecName: "kube-api-access-x5znb") pod "4391a03b-0c86-4610-a99f-0e4a1e1abce3" (UID: "4391a03b-0c86-4610-a99f-0e4a1e1abce3"). InnerVolumeSpecName "kube-api-access-x5znb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.356697 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/266fb2db-b1d7-4a1d-8581-2ef284916384-kube-api-access-p5xkk" (OuterVolumeSpecName: "kube-api-access-p5xkk") pod "266fb2db-b1d7-4a1d-8581-2ef284916384" (UID: "266fb2db-b1d7-4a1d-8581-2ef284916384"). InnerVolumeSpecName "kube-api-access-p5xkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.367400 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4391a03b-0c86-4610-a99f-0e4a1e1abce3-config-data" (OuterVolumeSpecName: "config-data") pod "4391a03b-0c86-4610-a99f-0e4a1e1abce3" (UID: "4391a03b-0c86-4610-a99f-0e4a1e1abce3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.374524 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266fb2db-b1d7-4a1d-8581-2ef284916384-config-data" (OuterVolumeSpecName: "config-data") pod "266fb2db-b1d7-4a1d-8581-2ef284916384" (UID: "266fb2db-b1d7-4a1d-8581-2ef284916384"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.449142 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4391a03b-0c86-4610-a99f-0e4a1e1abce3-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.449408 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5znb\" (UniqueName: \"kubernetes.io/projected/4391a03b-0c86-4610-a99f-0e4a1e1abce3-kube-api-access-x5znb\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.449482 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266fb2db-b1d7-4a1d-8581-2ef284916384-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.449551 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5xkk\" (UniqueName: \"kubernetes.io/projected/266fb2db-b1d7-4a1d-8581-2ef284916384-kube-api-access-p5xkk\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.449634 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4391a03b-0c86-4610-a99f-0e4a1e1abce3-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.649274 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.652468 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-logs\") pod \"d6776e9b-c6c4-4b79-a16e-95c8d899bb94\" (UID: \"d6776e9b-c6c4-4b79-a16e-95c8d899bb94\") " Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.652535 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89t5p\" (UniqueName: \"kubernetes.io/projected/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-kube-api-access-89t5p\") pod \"d6776e9b-c6c4-4b79-a16e-95c8d899bb94\" (UID: \"d6776e9b-c6c4-4b79-a16e-95c8d899bb94\") " Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.652658 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-config-data\") pod \"d6776e9b-c6c4-4b79-a16e-95c8d899bb94\" (UID: \"d6776e9b-c6c4-4b79-a16e-95c8d899bb94\") " Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.653893 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-logs" (OuterVolumeSpecName: "logs") pod "d6776e9b-c6c4-4b79-a16e-95c8d899bb94" (UID: "d6776e9b-c6c4-4b79-a16e-95c8d899bb94"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.656790 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-kube-api-access-89t5p" (OuterVolumeSpecName: "kube-api-access-89t5p") pod "d6776e9b-c6c4-4b79-a16e-95c8d899bb94" (UID: "d6776e9b-c6c4-4b79-a16e-95c8d899bb94"). InnerVolumeSpecName "kube-api-access-89t5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.684648 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-config-data" (OuterVolumeSpecName: "config-data") pod "d6776e9b-c6c4-4b79-a16e-95c8d899bb94" (UID: "d6776e9b-c6c4-4b79-a16e-95c8d899bb94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.739872 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0bf0b-account-delete-fscnl" event={"ID":"a70c584f-d07b-4d52-8188-51b3d332e80e","Type":"ContainerDied","Data":"74de48d15f309422ac12b997dc47e5f9cc538e00e3247f8b71669b9014df4829"} Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.739926 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74de48d15f309422ac12b997dc47e5f9cc538e00e3247f8b71669b9014df4829" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.739938 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0bf0b-account-delete-fscnl" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.744760 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.745807 4875 generic.go:334] "Generic (PLEG): container finished" podID="266fb2db-b1d7-4a1d-8581-2ef284916384" containerID="039adc2b6b4d851dd4d207487ca5257b522af43ebc72c98b7c4f8db7c96ef7ba" exitCode=0 Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.744789 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"266fb2db-b1d7-4a1d-8581-2ef284916384","Type":"ContainerDied","Data":"039adc2b6b4d851dd4d207487ca5257b522af43ebc72c98b7c4f8db7c96ef7ba"} Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.746059 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"266fb2db-b1d7-4a1d-8581-2ef284916384","Type":"ContainerDied","Data":"b8c6754749e85c9676d0f4403791206d2920e2636e7b792d6acf15ea1c1bb9dc"} Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.746088 4875 scope.go:117] "RemoveContainer" containerID="039adc2b6b4d851dd4d207487ca5257b522af43ebc72c98b7c4f8db7c96ef7ba" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.752931 4875 generic.go:334] "Generic (PLEG): container finished" podID="4391a03b-0c86-4610-a99f-0e4a1e1abce3" containerID="42a65d8fb1d7828764831a43992659c1e3ad1479b245f7f9ba899d441394d899" exitCode=0 Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.752997 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4391a03b-0c86-4610-a99f-0e4a1e1abce3","Type":"ContainerDied","Data":"42a65d8fb1d7828764831a43992659c1e3ad1479b245f7f9ba899d441394d899"} Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.753023 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4391a03b-0c86-4610-a99f-0e4a1e1abce3","Type":"ContainerDied","Data":"6c6087e62283e156a7c7931d6a5e66ce610bf8486076b62f7ed5550ad71ac40e"} Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.753023 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.753694 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.753719 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89t5p\" (UniqueName: \"kubernetes.io/projected/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-kube-api-access-89t5p\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.753733 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6776e9b-c6c4-4b79-a16e-95c8d899bb94-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.754786 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapidd3c-account-delete-dd5b2" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.754789 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapidd3c-account-delete-dd5b2" event={"ID":"2e137407-ca82-4025-947c-910890fb11a9","Type":"ContainerDied","Data":"7355217a13de368b80603e162d320a54e9617b9cbe3ca46bbb63866e5e0cf0a7"} Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.755031 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7355217a13de368b80603e162d320a54e9617b9cbe3ca46bbb63866e5e0cf0a7" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.764210 4875 generic.go:334] "Generic (PLEG): container finished" podID="d6776e9b-c6c4-4b79-a16e-95c8d899bb94" containerID="7d12b4edec8ba321d54cf7edc3d53cda4852cc1afa82bf3ff649751a28a48332" exitCode=0 Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.764251 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d6776e9b-c6c4-4b79-a16e-95c8d899bb94","Type":"ContainerDied","Data":"7d12b4edec8ba321d54cf7edc3d53cda4852cc1afa82bf3ff649751a28a48332"} Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.764275 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d6776e9b-c6c4-4b79-a16e-95c8d899bb94","Type":"ContainerDied","Data":"527e336ba7f908461adae85060c560629331916b56069e42fabc0286d840ae2e"} Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.764328 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.773212 4875 scope.go:117] "RemoveContainer" containerID="039adc2b6b4d851dd4d207487ca5257b522af43ebc72c98b7c4f8db7c96ef7ba" Jan 30 17:26:59 crc kubenswrapper[4875]: E0130 17:26:59.773675 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"039adc2b6b4d851dd4d207487ca5257b522af43ebc72c98b7c4f8db7c96ef7ba\": container with ID starting with 039adc2b6b4d851dd4d207487ca5257b522af43ebc72c98b7c4f8db7c96ef7ba not found: ID does not exist" containerID="039adc2b6b4d851dd4d207487ca5257b522af43ebc72c98b7c4f8db7c96ef7ba" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.773721 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"039adc2b6b4d851dd4d207487ca5257b522af43ebc72c98b7c4f8db7c96ef7ba"} err="failed to get container status \"039adc2b6b4d851dd4d207487ca5257b522af43ebc72c98b7c4f8db7c96ef7ba\": rpc error: code = NotFound desc = could not find container \"039adc2b6b4d851dd4d207487ca5257b522af43ebc72c98b7c4f8db7c96ef7ba\": container with ID starting with 039adc2b6b4d851dd4d207487ca5257b522af43ebc72c98b7c4f8db7c96ef7ba not found: ID does not exist" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.773751 4875 scope.go:117] "RemoveContainer" containerID="42a65d8fb1d7828764831a43992659c1e3ad1479b245f7f9ba899d441394d899" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.793868 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.797396 4875 scope.go:117] "RemoveContainer" containerID="bc7a92d89dd35a9af68f741d558bf205a2df49311806bd30ace880159130871b" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.804344 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.813205 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.820336 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.823654 4875 scope.go:117] "RemoveContainer" containerID="42a65d8fb1d7828764831a43992659c1e3ad1479b245f7f9ba899d441394d899" Jan 30 17:26:59 crc kubenswrapper[4875]: E0130 17:26:59.824177 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42a65d8fb1d7828764831a43992659c1e3ad1479b245f7f9ba899d441394d899\": container with ID starting with 42a65d8fb1d7828764831a43992659c1e3ad1479b245f7f9ba899d441394d899 not found: ID does not exist" containerID="42a65d8fb1d7828764831a43992659c1e3ad1479b245f7f9ba899d441394d899" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.824217 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a65d8fb1d7828764831a43992659c1e3ad1479b245f7f9ba899d441394d899"} err="failed to get container status \"42a65d8fb1d7828764831a43992659c1e3ad1479b245f7f9ba899d441394d899\": rpc error: code = NotFound desc = could not find container \"42a65d8fb1d7828764831a43992659c1e3ad1479b245f7f9ba899d441394d899\": container with ID starting with 42a65d8fb1d7828764831a43992659c1e3ad1479b245f7f9ba899d441394d899 not found: ID does 
not exist" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.824241 4875 scope.go:117] "RemoveContainer" containerID="bc7a92d89dd35a9af68f741d558bf205a2df49311806bd30ace880159130871b" Jan 30 17:26:59 crc kubenswrapper[4875]: E0130 17:26:59.824535 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc7a92d89dd35a9af68f741d558bf205a2df49311806bd30ace880159130871b\": container with ID starting with bc7a92d89dd35a9af68f741d558bf205a2df49311806bd30ace880159130871b not found: ID does not exist" containerID="bc7a92d89dd35a9af68f741d558bf205a2df49311806bd30ace880159130871b" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.824576 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc7a92d89dd35a9af68f741d558bf205a2df49311806bd30ace880159130871b"} err="failed to get container status \"bc7a92d89dd35a9af68f741d558bf205a2df49311806bd30ace880159130871b\": rpc error: code = NotFound desc = could not find container \"bc7a92d89dd35a9af68f741d558bf205a2df49311806bd30ace880159130871b\": container with ID starting with bc7a92d89dd35a9af68f741d558bf205a2df49311806bd30ace880159130871b not found: ID does not exist" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.824624 4875 scope.go:117] "RemoveContainer" containerID="7d12b4edec8ba321d54cf7edc3d53cda4852cc1afa82bf3ff649751a28a48332" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.827281 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.834011 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.842275 4875 scope.go:117] "RemoveContainer" containerID="bce312694ce95c8b4e2417285a4914297fc2566503b63b3ccfef6a3d2112a1dc" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.861096 4875 scope.go:117] "RemoveContainer" containerID="7d12b4edec8ba321d54cf7edc3d53cda4852cc1afa82bf3ff649751a28a48332" Jan 30 17:26:59 crc kubenswrapper[4875]: E0130 17:26:59.862159 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d12b4edec8ba321d54cf7edc3d53cda4852cc1afa82bf3ff649751a28a48332\": container with ID starting with 7d12b4edec8ba321d54cf7edc3d53cda4852cc1afa82bf3ff649751a28a48332 not found: ID does not exist" containerID="7d12b4edec8ba321d54cf7edc3d53cda4852cc1afa82bf3ff649751a28a48332" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.862204 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d12b4edec8ba321d54cf7edc3d53cda4852cc1afa82bf3ff649751a28a48332"} err="failed to get container status \"7d12b4edec8ba321d54cf7edc3d53cda4852cc1afa82bf3ff649751a28a48332\": rpc error: code = NotFound desc = could not find container \"7d12b4edec8ba321d54cf7edc3d53cda4852cc1afa82bf3ff649751a28a48332\": container with ID starting with 7d12b4edec8ba321d54cf7edc3d53cda4852cc1afa82bf3ff649751a28a48332 not found: ID does not exist" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.862235 4875 scope.go:117] "RemoveContainer" containerID="bce312694ce95c8b4e2417285a4914297fc2566503b63b3ccfef6a3d2112a1dc" Jan 30 17:26:59 crc kubenswrapper[4875]: E0130 17:26:59.862799 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bce312694ce95c8b4e2417285a4914297fc2566503b63b3ccfef6a3d2112a1dc\": container with ID starting with bce312694ce95c8b4e2417285a4914297fc2566503b63b3ccfef6a3d2112a1dc not found: ID does not exist" containerID="bce312694ce95c8b4e2417285a4914297fc2566503b63b3ccfef6a3d2112a1dc" Jan 30 17:26:59 crc kubenswrapper[4875]: I0130 17:26:59.862852 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bce312694ce95c8b4e2417285a4914297fc2566503b63b3ccfef6a3d2112a1dc"} err="failed to get container status \"bce312694ce95c8b4e2417285a4914297fc2566503b63b3ccfef6a3d2112a1dc\": rpc error: code = NotFound desc = could not find container \"bce312694ce95c8b4e2417285a4914297fc2566503b63b3ccfef6a3d2112a1dc\": container with ID starting with bce312694ce95c8b4e2417285a4914297fc2566503b63b3ccfef6a3d2112a1dc not found: ID does not exist" Jan 30 17:27:00 crc kubenswrapper[4875]: I0130 17:27:00.151571 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="266fb2db-b1d7-4a1d-8581-2ef284916384" path="/var/lib/kubelet/pods/266fb2db-b1d7-4a1d-8581-2ef284916384/volumes" Jan 30 17:27:00 crc kubenswrapper[4875]: I0130 17:27:00.153047 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ae65aa7-4fcd-4724-90ba-2a70bcf7472b" path="/var/lib/kubelet/pods/3ae65aa7-4fcd-4724-90ba-2a70bcf7472b/volumes" Jan 30 17:27:00 crc kubenswrapper[4875]: I0130 17:27:00.154077 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4391a03b-0c86-4610-a99f-0e4a1e1abce3" path="/var/lib/kubelet/pods/4391a03b-0c86-4610-a99f-0e4a1e1abce3/volumes" Jan 30 17:27:00 crc kubenswrapper[4875]: I0130 17:27:00.156073 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6776e9b-c6c4-4b79-a16e-95c8d899bb94" path="/var/lib/kubelet/pods/d6776e9b-c6c4-4b79-a16e-95c8d899bb94/volumes" Jan 30 17:27:00 crc kubenswrapper[4875]: I0130 17:27:00.538850 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novaapidd3c-account-delete-dd5b2"] Jan 30 17:27:00 crc kubenswrapper[4875]: I0130 17:27:00.550835 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapidd3c-account-delete-dd5b2"] Jan 30 17:27:00 crc kubenswrapper[4875]: I0130 17:27:00.629434 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell0bf0b-account-delete-fscnl"] Jan 30 17:27:00 crc kubenswrapper[4875]: I0130 17:27:00.634275 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell0bf0b-account-delete-fscnl"] Jan 30 17:27:02 crc kubenswrapper[4875]: I0130 17:27:02.144070 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e137407-ca82-4025-947c-910890fb11a9" path="/var/lib/kubelet/pods/2e137407-ca82-4025-947c-910890fb11a9/volumes" Jan 30 17:27:02 crc kubenswrapper[4875]: I0130 17:27:02.144798 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a70c584f-d07b-4d52-8188-51b3d332e80e" path="/var/lib/kubelet/pods/a70c584f-d07b-4d52-8188-51b3d332e80e/volumes" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.090488 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-4df7t"] Jan 30 17:27:03 crc kubenswrapper[4875]: E0130 17:27:03.090919 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a70c584f-d07b-4d52-8188-51b3d332e80e" containerName="mariadb-account-delete" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.090945 4875 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="a70c584f-d07b-4d52-8188-51b3d332e80e" containerName="mariadb-account-delete" Jan 30 17:27:03 crc kubenswrapper[4875]: E0130 17:27:03.090972 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6776e9b-c6c4-4b79-a16e-95c8d899bb94" containerName="nova-kuttl-api-api" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.090981 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6776e9b-c6c4-4b79-a16e-95c8d899bb94" containerName="nova-kuttl-api-api" Jan 30 17:27:03 crc kubenswrapper[4875]: E0130 17:27:03.090998 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4391a03b-0c86-4610-a99f-0e4a1e1abce3" containerName="nova-kuttl-metadata-log" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.091010 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="4391a03b-0c86-4610-a99f-0e4a1e1abce3" containerName="nova-kuttl-metadata-log" Jan 30 17:27:03 crc kubenswrapper[4875]: E0130 17:27:03.091026 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266fb2db-b1d7-4a1d-8581-2ef284916384" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.091036 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="266fb2db-b1d7-4a1d-8581-2ef284916384" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:27:03 crc kubenswrapper[4875]: E0130 17:27:03.091050 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4391a03b-0c86-4610-a99f-0e4a1e1abce3" containerName="nova-kuttl-metadata-metadata" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.091059 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="4391a03b-0c86-4610-a99f-0e4a1e1abce3" containerName="nova-kuttl-metadata-metadata" Jan 30 17:27:03 crc kubenswrapper[4875]: E0130 17:27:03.091079 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6776e9b-c6c4-4b79-a16e-95c8d899bb94" containerName="nova-kuttl-api-log" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.091087 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6776e9b-c6c4-4b79-a16e-95c8d899bb94" containerName="nova-kuttl-api-log" Jan 30 17:27:03 crc kubenswrapper[4875]: E0130 17:27:03.091099 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e137407-ca82-4025-947c-910890fb11a9" containerName="mariadb-account-delete" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.091107 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e137407-ca82-4025-947c-910890fb11a9" containerName="mariadb-account-delete" Jan 30 17:27:03 crc kubenswrapper[4875]: E0130 17:27:03.091125 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ae65aa7-4fcd-4724-90ba-2a70bcf7472b" containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.091134 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae65aa7-4fcd-4724-90ba-2a70bcf7472b" containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.091326 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ae65aa7-4fcd-4724-90ba-2a70bcf7472b" containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.091350 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="4391a03b-0c86-4610-a99f-0e4a1e1abce3" containerName="nova-kuttl-metadata-metadata" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.091364 4875 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4391a03b-0c86-4610-a99f-0e4a1e1abce3" containerName="nova-kuttl-metadata-log" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.091380 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e137407-ca82-4025-947c-910890fb11a9" containerName="mariadb-account-delete" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.091392 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="266fb2db-b1d7-4a1d-8581-2ef284916384" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.091402 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6776e9b-c6c4-4b79-a16e-95c8d899bb94" containerName="nova-kuttl-api-log" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.091409 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6776e9b-c6c4-4b79-a16e-95c8d899bb94" containerName="nova-kuttl-api-api" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.091418 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="a70c584f-d07b-4d52-8188-51b3d332e80e" containerName="mariadb-account-delete" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.092139 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-4df7t" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.098361 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26565c7a-594f-4ca2-b6c9-ea0527c04619-operator-scripts\") pod \"nova-api-db-create-4df7t\" (UID: \"26565c7a-594f-4ca2-b6c9-ea0527c04619\") " pod="nova-kuttl-default/nova-api-db-create-4df7t" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.098429 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvdrh\" (UniqueName: \"kubernetes.io/projected/26565c7a-594f-4ca2-b6c9-ea0527c04619-kube-api-access-qvdrh\") pod \"nova-api-db-create-4df7t\" (UID: \"26565c7a-594f-4ca2-b6c9-ea0527c04619\") " pod="nova-kuttl-default/nova-api-db-create-4df7t" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.107800 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-4df7t"] Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.191456 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-c6lw8"] Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.193474 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-c6lw8" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.199699 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26565c7a-594f-4ca2-b6c9-ea0527c04619-operator-scripts\") pod \"nova-api-db-create-4df7t\" (UID: \"26565c7a-594f-4ca2-b6c9-ea0527c04619\") " pod="nova-kuttl-default/nova-api-db-create-4df7t" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.199756 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvdrh\" (UniqueName: \"kubernetes.io/projected/26565c7a-594f-4ca2-b6c9-ea0527c04619-kube-api-access-qvdrh\") pod \"nova-api-db-create-4df7t\" (UID: \"26565c7a-594f-4ca2-b6c9-ea0527c04619\") " pod="nova-kuttl-default/nova-api-db-create-4df7t" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.199694 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-c6lw8"] Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.202664 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26565c7a-594f-4ca2-b6c9-ea0527c04619-operator-scripts\") pod \"nova-api-db-create-4df7t\" (UID: \"26565c7a-594f-4ca2-b6c9-ea0527c04619\") " pod="nova-kuttl-default/nova-api-db-create-4df7t" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.234354 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvdrh\" (UniqueName: \"kubernetes.io/projected/26565c7a-594f-4ca2-b6c9-ea0527c04619-kube-api-access-qvdrh\") pod \"nova-api-db-create-4df7t\" (UID: \"26565c7a-594f-4ca2-b6c9-ea0527c04619\") " pod="nova-kuttl-default/nova-api-db-create-4df7t" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.291388 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-7118-account-create-update-gqwx9"] Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.292550 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-7118-account-create-update-gqwx9" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.294568 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.298578 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-7118-account-create-update-gqwx9"] Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.300786 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv6fp\" (UniqueName: \"kubernetes.io/projected/2d70a7be-3789-4619-9e33-7b2d249345bd-kube-api-access-qv6fp\") pod \"nova-cell0-db-create-c6lw8\" (UID: \"2d70a7be-3789-4619-9e33-7b2d249345bd\") " pod="nova-kuttl-default/nova-cell0-db-create-c6lw8" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.300885 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d70a7be-3789-4619-9e33-7b2d249345bd-operator-scripts\") pod \"nova-cell0-db-create-c6lw8\" (UID: \"2d70a7be-3789-4619-9e33-7b2d249345bd\") " pod="nova-kuttl-default/nova-cell0-db-create-c6lw8" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.403050 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dab1bdf5-00ae-422b-9edc-f663f448c46b-operator-scripts\") pod \"nova-api-7118-account-create-update-gqwx9\" (UID: \"dab1bdf5-00ae-422b-9edc-f663f448c46b\") " pod="nova-kuttl-default/nova-api-7118-account-create-update-gqwx9" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.403178 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv6fp\" (UniqueName: \"kubernetes.io/projected/2d70a7be-3789-4619-9e33-7b2d249345bd-kube-api-access-qv6fp\") pod \"nova-cell0-db-create-c6lw8\" (UID: \"2d70a7be-3789-4619-9e33-7b2d249345bd\") " pod="nova-kuttl-default/nova-cell0-db-create-c6lw8" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.403264 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6rhx\" (UniqueName: \"kubernetes.io/projected/dab1bdf5-00ae-422b-9edc-f663f448c46b-kube-api-access-l6rhx\") pod \"nova-api-7118-account-create-update-gqwx9\" (UID: \"dab1bdf5-00ae-422b-9edc-f663f448c46b\") " pod="nova-kuttl-default/nova-api-7118-account-create-update-gqwx9" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.403293 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d70a7be-3789-4619-9e33-7b2d249345bd-operator-scripts\") pod \"nova-cell0-db-create-c6lw8\" (UID: \"2d70a7be-3789-4619-9e33-7b2d249345bd\") " pod="nova-kuttl-default/nova-cell0-db-create-c6lw8" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.404276 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d70a7be-3789-4619-9e33-7b2d249345bd-operator-scripts\") pod \"nova-cell0-db-create-c6lw8\" (UID: \"2d70a7be-3789-4619-9e33-7b2d249345bd\") " pod="nova-kuttl-default/nova-cell0-db-create-c6lw8" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.411183 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-4df7t" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.421190 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv6fp\" (UniqueName: \"kubernetes.io/projected/2d70a7be-3789-4619-9e33-7b2d249345bd-kube-api-access-qv6fp\") pod \"nova-cell0-db-create-c6lw8\" (UID: \"2d70a7be-3789-4619-9e33-7b2d249345bd\") " pod="nova-kuttl-default/nova-cell0-db-create-c6lw8" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.493341 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-r6npw"] Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.496635 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-r6npw" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.504846 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6rhx\" (UniqueName: \"kubernetes.io/projected/dab1bdf5-00ae-422b-9edc-f663f448c46b-kube-api-access-l6rhx\") pod \"nova-api-7118-account-create-update-gqwx9\" (UID: \"dab1bdf5-00ae-422b-9edc-f663f448c46b\") " pod="nova-kuttl-default/nova-api-7118-account-create-update-gqwx9" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.504901 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dab1bdf5-00ae-422b-9edc-f663f448c46b-operator-scripts\") pod \"nova-api-7118-account-create-update-gqwx9\" (UID: \"dab1bdf5-00ae-422b-9edc-f663f448c46b\") " pod="nova-kuttl-default/nova-api-7118-account-create-update-gqwx9" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.505611 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dab1bdf5-00ae-422b-9edc-f663f448c46b-operator-scripts\") pod \"nova-api-7118-account-create-update-gqwx9\" (UID: \"dab1bdf5-00ae-422b-9edc-f663f448c46b\") " pod="nova-kuttl-default/nova-api-7118-account-create-update-gqwx9" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.507449 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-r6npw"] Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.515132 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-c6lw8" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.521396 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t"] Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.522770 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.525635 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.530271 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6rhx\" (UniqueName: \"kubernetes.io/projected/dab1bdf5-00ae-422b-9edc-f663f448c46b-kube-api-access-l6rhx\") pod \"nova-api-7118-account-create-update-gqwx9\" (UID: \"dab1bdf5-00ae-422b-9edc-f663f448c46b\") " pod="nova-kuttl-default/nova-api-7118-account-create-update-gqwx9" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.557196 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t"] Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.607512 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmwrh\" (UniqueName: \"kubernetes.io/projected/5f18684e-d712-4eee-ae0c-e2030de0676b-kube-api-access-xmwrh\") pod \"nova-cell1-db-create-r6npw\" (UID: \"5f18684e-d712-4eee-ae0c-e2030de0676b\") " pod="nova-kuttl-default/nova-cell1-db-create-r6npw" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.607564 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8e63a5d-4ddb-4c97-8204-bdd6342418bd-operator-scripts\") pod \"nova-cell0-94f9-account-create-update-7n96t\" (UID: \"d8e63a5d-4ddb-4c97-8204-bdd6342418bd\") " pod="nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.607876 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6djj\" (UniqueName: \"kubernetes.io/projected/d8e63a5d-4ddb-4c97-8204-bdd6342418bd-kube-api-access-l6djj\") pod \"nova-cell0-94f9-account-create-update-7n96t\" (UID: \"d8e63a5d-4ddb-4c97-8204-bdd6342418bd\") " pod="nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.607969 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f18684e-d712-4eee-ae0c-e2030de0676b-operator-scripts\") pod \"nova-cell1-db-create-r6npw\" (UID: \"5f18684e-d712-4eee-ae0c-e2030de0676b\") " pod="nova-kuttl-default/nova-cell1-db-create-r6npw" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.610307 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-7118-account-create-update-gqwx9" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.709136 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6djj\" (UniqueName: \"kubernetes.io/projected/d8e63a5d-4ddb-4c97-8204-bdd6342418bd-kube-api-access-l6djj\") pod \"nova-cell0-94f9-account-create-update-7n96t\" (UID: \"d8e63a5d-4ddb-4c97-8204-bdd6342418bd\") " pod="nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.709191 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f18684e-d712-4eee-ae0c-e2030de0676b-operator-scripts\") pod \"nova-cell1-db-create-r6npw\" (UID: \"5f18684e-d712-4eee-ae0c-e2030de0676b\") " pod="nova-kuttl-default/nova-cell1-db-create-r6npw" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.709240 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmwrh\" (UniqueName: \"kubernetes.io/projected/5f18684e-d712-4eee-ae0c-e2030de0676b-kube-api-access-xmwrh\") pod \"nova-cell1-db-create-r6npw\" (UID: \"5f18684e-d712-4eee-ae0c-e2030de0676b\") " pod="nova-kuttl-default/nova-cell1-db-create-r6npw" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.709259 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8e63a5d-4ddb-4c97-8204-bdd6342418bd-operator-scripts\") pod \"nova-cell0-94f9-account-create-update-7n96t\" (UID: \"d8e63a5d-4ddb-4c97-8204-bdd6342418bd\") " pod="nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.710115 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f18684e-d712-4eee-ae0c-e2030de0676b-operator-scripts\") pod \"nova-cell1-db-create-r6npw\" (UID: \"5f18684e-d712-4eee-ae0c-e2030de0676b\") " pod="nova-kuttl-default/nova-cell1-db-create-r6npw" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.710137 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8e63a5d-4ddb-4c97-8204-bdd6342418bd-operator-scripts\") pod \"nova-cell0-94f9-account-create-update-7n96t\" (UID: \"d8e63a5d-4ddb-4c97-8204-bdd6342418bd\") " pod="nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.714656 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m"] Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.716356 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.720508 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m"] Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.724028 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.728119 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6djj\" (UniqueName: \"kubernetes.io/projected/d8e63a5d-4ddb-4c97-8204-bdd6342418bd-kube-api-access-l6djj\") pod \"nova-cell0-94f9-account-create-update-7n96t\" (UID: \"d8e63a5d-4ddb-4c97-8204-bdd6342418bd\") " pod="nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.732825 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmwrh\" (UniqueName: \"kubernetes.io/projected/5f18684e-d712-4eee-ae0c-e2030de0676b-kube-api-access-xmwrh\") pod \"nova-cell1-db-create-r6npw\" (UID: \"5f18684e-d712-4eee-ae0c-e2030de0676b\") " pod="nova-kuttl-default/nova-cell1-db-create-r6npw" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.811241 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4snwk\" (UniqueName: \"kubernetes.io/projected/387ad041-1225-4993-a8bc-7d63648e123a-kube-api-access-4snwk\") pod \"nova-cell1-88b2-account-create-update-m4t6m\" (UID: \"387ad041-1225-4993-a8bc-7d63648e123a\") " pod="nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.811303 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/387ad041-1225-4993-a8bc-7d63648e123a-operator-scripts\") pod \"nova-cell1-88b2-account-create-update-m4t6m\" (UID: \"387ad041-1225-4993-a8bc-7d63648e123a\") " pod="nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.816865 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-r6npw" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.908299 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.913289 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4snwk\" (UniqueName: \"kubernetes.io/projected/387ad041-1225-4993-a8bc-7d63648e123a-kube-api-access-4snwk\") pod \"nova-cell1-88b2-account-create-update-m4t6m\" (UID: \"387ad041-1225-4993-a8bc-7d63648e123a\") " pod="nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.913385 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/387ad041-1225-4993-a8bc-7d63648e123a-operator-scripts\") pod \"nova-cell1-88b2-account-create-update-m4t6m\" (UID: \"387ad041-1225-4993-a8bc-7d63648e123a\") " pod="nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.914250 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/387ad041-1225-4993-a8bc-7d63648e123a-operator-scripts\") pod \"nova-cell1-88b2-account-create-update-m4t6m\" (UID: \"387ad041-1225-4993-a8bc-7d63648e123a\") " pod="nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.952556 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4snwk\" (UniqueName: \"kubernetes.io/projected/387ad041-1225-4993-a8bc-7d63648e123a-kube-api-access-4snwk\") pod \"nova-cell1-88b2-account-create-update-m4t6m\" (UID: \"387ad041-1225-4993-a8bc-7d63648e123a\") " pod="nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m" Jan 30 17:27:03 crc kubenswrapper[4875]: I0130 17:27:03.970735 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-4df7t"] Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.087766 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m" Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.107423 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-c6lw8"] Jan 30 17:27:04 crc kubenswrapper[4875]: W0130 17:27:04.133204 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d70a7be_3789_4619_9e33_7b2d249345bd.slice/crio-17f38195ff93959391abf32e9c413959755fbf936c060d4c54a07ec7c69e4e92 WatchSource:0}: Error finding container 17f38195ff93959391abf32e9c413959755fbf936c060d4c54a07ec7c69e4e92: Status 404 returned error can't find the container with id 17f38195ff93959391abf32e9c413959755fbf936c060d4c54a07ec7c69e4e92 Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.182039 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-7118-account-create-update-gqwx9"] Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.186161 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-r6npw"] Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.491662 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t"] Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.635165 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m"] Jan 30 17:27:04 crc kubenswrapper[4875]: W0130 17:27:04.641294 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod387ad041_1225_4993_a8bc_7d63648e123a.slice/crio-b9308ed4700186a7029e652351752f61f0a138a6c2d5df9670ed1cffabe91eca WatchSource:0}: Error finding container b9308ed4700186a7029e652351752f61f0a138a6c2d5df9670ed1cffabe91eca: Status 404 returned error can't find the container with id b9308ed4700186a7029e652351752f61f0a138a6c2d5df9670ed1cffabe91eca Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.836762 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-4df7t" event={"ID":"26565c7a-594f-4ca2-b6c9-ea0527c04619","Type":"ContainerStarted","Data":"ffcebc834d43459befd2e672b1e1b9a2c97b6252c714163806ce8712c364c5fb"} Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.836809 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-4df7t" event={"ID":"26565c7a-594f-4ca2-b6c9-ea0527c04619","Type":"ContainerStarted","Data":"e18a8d4efc0a767099a8b27cd190976869b3fe0f3c5a00d32636d5cdf21df83a"} Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.838601 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m" event={"ID":"387ad041-1225-4993-a8bc-7d63648e123a","Type":"ContainerStarted","Data":"b9308ed4700186a7029e652351752f61f0a138a6c2d5df9670ed1cffabe91eca"} Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.841188 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-c6lw8" event={"ID":"2d70a7be-3789-4619-9e33-7b2d249345bd","Type":"ContainerStarted","Data":"7131a5fc87461b1befe413ec73a906a9795e46a30c3a7c912be498a59cdb76e8"} Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.841226 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-c6lw8" 
event={"ID":"2d70a7be-3789-4619-9e33-7b2d249345bd","Type":"ContainerStarted","Data":"17f38195ff93959391abf32e9c413959755fbf936c060d4c54a07ec7c69e4e92"} Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.843403 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t" event={"ID":"d8e63a5d-4ddb-4c97-8204-bdd6342418bd","Type":"ContainerStarted","Data":"31a2129c16ec7374e4f2bba041eb76f49d5924710f21ce2ed7ae354197d4e721"} Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.846534 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-r6npw" event={"ID":"5f18684e-d712-4eee-ae0c-e2030de0676b","Type":"ContainerStarted","Data":"9fff9d8d9f07906d6e7a84d89cc9440aed1329c3a5c5d350600e526f00a7436f"} Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.846575 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-r6npw" event={"ID":"5f18684e-d712-4eee-ae0c-e2030de0676b","Type":"ContainerStarted","Data":"f2550cad8bd0fab0c0aac259ce20bdea94d8442cb8126a962476a245a7cc3a73"} Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.849703 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-7118-account-create-update-gqwx9" event={"ID":"dab1bdf5-00ae-422b-9edc-f663f448c46b","Type":"ContainerStarted","Data":"5135aaed955a43e8e67672677d3c0535de6394613b6edba99a19074341436113"} Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.849919 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-7118-account-create-update-gqwx9" event={"ID":"dab1bdf5-00ae-422b-9edc-f663f448c46b","Type":"ContainerStarted","Data":"fd0e93fc345fab66ea11733ceb28693112e97791c49d3640f221e0ea54b5e097"} Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.854905 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-api-db-create-4df7t" podStartSLOduration=1.854883887 podStartE2EDuration="1.854883887s" podCreationTimestamp="2026-01-30 17:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:27:04.850868185 +0000 UTC m=+1835.398231568" watchObservedRunningTime="2026-01-30 17:27:04.854883887 +0000 UTC m=+1835.402247270" Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.871303 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-cell1-db-create-r6npw" podStartSLOduration=1.871284282 podStartE2EDuration="1.871284282s" podCreationTimestamp="2026-01-30 17:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:27:04.868448756 +0000 UTC m=+1835.415812139" watchObservedRunningTime="2026-01-30 17:27:04.871284282 +0000 UTC m=+1835.418647665" Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.883523 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-api-7118-account-create-update-gqwx9" podStartSLOduration=1.883506561 podStartE2EDuration="1.883506561s" podCreationTimestamp="2026-01-30 17:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:27:04.882375216 +0000 UTC m=+1835.429738619" watchObservedRunningTime="2026-01-30 17:27:04.883506561 +0000 UTC m=+1835.430869934" 
Jan 30 17:27:04 crc kubenswrapper[4875]: I0130 17:27:04.900647 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-cell0-db-create-c6lw8" podStartSLOduration=1.9006303180000002 podStartE2EDuration="1.900630318s" podCreationTimestamp="2026-01-30 17:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:27:04.895901474 +0000 UTC m=+1835.443264857" watchObservedRunningTime="2026-01-30 17:27:04.900630318 +0000 UTC m=+1835.447993701" Jan 30 17:27:05 crc kubenswrapper[4875]: I0130 17:27:05.860825 4875 generic.go:334] "Generic (PLEG): container finished" podID="387ad041-1225-4993-a8bc-7d63648e123a" containerID="18a7ab848c358b391a6491ffb397203e51c07cef2e2d9b7874e3ee22c65212e7" exitCode=0 Jan 30 17:27:05 crc kubenswrapper[4875]: I0130 17:27:05.861020 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m" event={"ID":"387ad041-1225-4993-a8bc-7d63648e123a","Type":"ContainerDied","Data":"18a7ab848c358b391a6491ffb397203e51c07cef2e2d9b7874e3ee22c65212e7"} Jan 30 17:27:05 crc kubenswrapper[4875]: I0130 17:27:05.865037 4875 generic.go:334] "Generic (PLEG): container finished" podID="2d70a7be-3789-4619-9e33-7b2d249345bd" containerID="7131a5fc87461b1befe413ec73a906a9795e46a30c3a7c912be498a59cdb76e8" exitCode=0 Jan 30 17:27:05 crc kubenswrapper[4875]: I0130 17:27:05.865153 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-c6lw8" event={"ID":"2d70a7be-3789-4619-9e33-7b2d249345bd","Type":"ContainerDied","Data":"7131a5fc87461b1befe413ec73a906a9795e46a30c3a7c912be498a59cdb76e8"} Jan 30 17:27:05 crc kubenswrapper[4875]: I0130 17:27:05.870736 4875 generic.go:334] "Generic (PLEG): container finished" podID="d8e63a5d-4ddb-4c97-8204-bdd6342418bd" containerID="4c435c7ff80e5e5253664e8e72ca2c8f0719ce98d14380cfc6f3755cd26ea028" exitCode=0 Jan 30 17:27:05 crc kubenswrapper[4875]: I0130 17:27:05.870813 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t" event={"ID":"d8e63a5d-4ddb-4c97-8204-bdd6342418bd","Type":"ContainerDied","Data":"4c435c7ff80e5e5253664e8e72ca2c8f0719ce98d14380cfc6f3755cd26ea028"} Jan 30 17:27:05 crc kubenswrapper[4875]: I0130 17:27:05.874181 4875 generic.go:334] "Generic (PLEG): container finished" podID="5f18684e-d712-4eee-ae0c-e2030de0676b" containerID="9fff9d8d9f07906d6e7a84d89cc9440aed1329c3a5c5d350600e526f00a7436f" exitCode=0 Jan 30 17:27:05 crc kubenswrapper[4875]: I0130 17:27:05.874245 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-r6npw" event={"ID":"5f18684e-d712-4eee-ae0c-e2030de0676b","Type":"ContainerDied","Data":"9fff9d8d9f07906d6e7a84d89cc9440aed1329c3a5c5d350600e526f00a7436f"} Jan 30 17:27:05 crc kubenswrapper[4875]: I0130 17:27:05.876976 4875 generic.go:334] "Generic (PLEG): container finished" podID="dab1bdf5-00ae-422b-9edc-f663f448c46b" containerID="5135aaed955a43e8e67672677d3c0535de6394613b6edba99a19074341436113" exitCode=0 Jan 30 17:27:05 crc kubenswrapper[4875]: I0130 17:27:05.877317 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-7118-account-create-update-gqwx9" event={"ID":"dab1bdf5-00ae-422b-9edc-f663f448c46b","Type":"ContainerDied","Data":"5135aaed955a43e8e67672677d3c0535de6394613b6edba99a19074341436113"} Jan 30 17:27:05 crc kubenswrapper[4875]: I0130 
17:27:05.880037 4875 generic.go:334] "Generic (PLEG): container finished" podID="26565c7a-594f-4ca2-b6c9-ea0527c04619" containerID="ffcebc834d43459befd2e672b1e1b9a2c97b6252c714163806ce8712c364c5fb" exitCode=0 Jan 30 17:27:05 crc kubenswrapper[4875]: I0130 17:27:05.880092 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-4df7t" event={"ID":"26565c7a-594f-4ca2-b6c9-ea0527c04619","Type":"ContainerDied","Data":"ffcebc834d43459befd2e672b1e1b9a2c97b6252c714163806ce8712c364c5fb"} Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.302777 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-r6npw" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.488266 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f18684e-d712-4eee-ae0c-e2030de0676b-operator-scripts\") pod \"5f18684e-d712-4eee-ae0c-e2030de0676b\" (UID: \"5f18684e-d712-4eee-ae0c-e2030de0676b\") " Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.488339 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmwrh\" (UniqueName: \"kubernetes.io/projected/5f18684e-d712-4eee-ae0c-e2030de0676b-kube-api-access-xmwrh\") pod \"5f18684e-d712-4eee-ae0c-e2030de0676b\" (UID: \"5f18684e-d712-4eee-ae0c-e2030de0676b\") " Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.489670 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f18684e-d712-4eee-ae0c-e2030de0676b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f18684e-d712-4eee-ae0c-e2030de0676b" (UID: "5f18684e-d712-4eee-ae0c-e2030de0676b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.501936 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f18684e-d712-4eee-ae0c-e2030de0676b-kube-api-access-xmwrh" (OuterVolumeSpecName: "kube-api-access-xmwrh") pod "5f18684e-d712-4eee-ae0c-e2030de0676b" (UID: "5f18684e-d712-4eee-ae0c-e2030de0676b"). InnerVolumeSpecName "kube-api-access-xmwrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.558607 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.567912 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-c6lw8" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.578783 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-7118-account-create-update-gqwx9" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.586434 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.591073 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f18684e-d712-4eee-ae0c-e2030de0676b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.591134 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmwrh\" (UniqueName: \"kubernetes.io/projected/5f18684e-d712-4eee-ae0c-e2030de0676b-kube-api-access-xmwrh\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.594257 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-4df7t" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.692230 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d70a7be-3789-4619-9e33-7b2d249345bd-operator-scripts\") pod \"2d70a7be-3789-4619-9e33-7b2d249345bd\" (UID: \"2d70a7be-3789-4619-9e33-7b2d249345bd\") " Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.692348 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvdrh\" (UniqueName: \"kubernetes.io/projected/26565c7a-594f-4ca2-b6c9-ea0527c04619-kube-api-access-qvdrh\") pod \"26565c7a-594f-4ca2-b6c9-ea0527c04619\" (UID: \"26565c7a-594f-4ca2-b6c9-ea0527c04619\") " Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.692418 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4snwk\" (UniqueName: \"kubernetes.io/projected/387ad041-1225-4993-a8bc-7d63648e123a-kube-api-access-4snwk\") pod \"387ad041-1225-4993-a8bc-7d63648e123a\" (UID: \"387ad041-1225-4993-a8bc-7d63648e123a\") " Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.692495 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/387ad041-1225-4993-a8bc-7d63648e123a-operator-scripts\") pod \"387ad041-1225-4993-a8bc-7d63648e123a\" (UID: \"387ad041-1225-4993-a8bc-7d63648e123a\") " Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.692555 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6djj\" (UniqueName: \"kubernetes.io/projected/d8e63a5d-4ddb-4c97-8204-bdd6342418bd-kube-api-access-l6djj\") pod \"d8e63a5d-4ddb-4c97-8204-bdd6342418bd\" (UID: \"d8e63a5d-4ddb-4c97-8204-bdd6342418bd\") " Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.692674 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26565c7a-594f-4ca2-b6c9-ea0527c04619-operator-scripts\") pod \"26565c7a-594f-4ca2-b6c9-ea0527c04619\" (UID: \"26565c7a-594f-4ca2-b6c9-ea0527c04619\") " Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.692728 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8e63a5d-4ddb-4c97-8204-bdd6342418bd-operator-scripts\") pod \"d8e63a5d-4ddb-4c97-8204-bdd6342418bd\" (UID: \"d8e63a5d-4ddb-4c97-8204-bdd6342418bd\") " Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.692787 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6rhx\" 
(UniqueName: \"kubernetes.io/projected/dab1bdf5-00ae-422b-9edc-f663f448c46b-kube-api-access-l6rhx\") pod \"dab1bdf5-00ae-422b-9edc-f663f448c46b\" (UID: \"dab1bdf5-00ae-422b-9edc-f663f448c46b\") " Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.692831 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dab1bdf5-00ae-422b-9edc-f663f448c46b-operator-scripts\") pod \"dab1bdf5-00ae-422b-9edc-f663f448c46b\" (UID: \"dab1bdf5-00ae-422b-9edc-f663f448c46b\") " Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.692874 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv6fp\" (UniqueName: \"kubernetes.io/projected/2d70a7be-3789-4619-9e33-7b2d249345bd-kube-api-access-qv6fp\") pod \"2d70a7be-3789-4619-9e33-7b2d249345bd\" (UID: \"2d70a7be-3789-4619-9e33-7b2d249345bd\") " Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.693310 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26565c7a-594f-4ca2-b6c9-ea0527c04619-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26565c7a-594f-4ca2-b6c9-ea0527c04619" (UID: "26565c7a-594f-4ca2-b6c9-ea0527c04619"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.693358 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dab1bdf5-00ae-422b-9edc-f663f448c46b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dab1bdf5-00ae-422b-9edc-f663f448c46b" (UID: "dab1bdf5-00ae-422b-9edc-f663f448c46b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.693363 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8e63a5d-4ddb-4c97-8204-bdd6342418bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8e63a5d-4ddb-4c97-8204-bdd6342418bd" (UID: "d8e63a5d-4ddb-4c97-8204-bdd6342418bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.693748 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/387ad041-1225-4993-a8bc-7d63648e123a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "387ad041-1225-4993-a8bc-7d63648e123a" (UID: "387ad041-1225-4993-a8bc-7d63648e123a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.693810 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d70a7be-3789-4619-9e33-7b2d249345bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d70a7be-3789-4619-9e33-7b2d249345bd" (UID: "2d70a7be-3789-4619-9e33-7b2d249345bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.696891 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d70a7be-3789-4619-9e33-7b2d249345bd-kube-api-access-qv6fp" (OuterVolumeSpecName: "kube-api-access-qv6fp") pod "2d70a7be-3789-4619-9e33-7b2d249345bd" (UID: "2d70a7be-3789-4619-9e33-7b2d249345bd"). 
InnerVolumeSpecName "kube-api-access-qv6fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.696949 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26565c7a-594f-4ca2-b6c9-ea0527c04619-kube-api-access-qvdrh" (OuterVolumeSpecName: "kube-api-access-qvdrh") pod "26565c7a-594f-4ca2-b6c9-ea0527c04619" (UID: "26565c7a-594f-4ca2-b6c9-ea0527c04619"). InnerVolumeSpecName "kube-api-access-qvdrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.697465 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/387ad041-1225-4993-a8bc-7d63648e123a-kube-api-access-4snwk" (OuterVolumeSpecName: "kube-api-access-4snwk") pod "387ad041-1225-4993-a8bc-7d63648e123a" (UID: "387ad041-1225-4993-a8bc-7d63648e123a"). InnerVolumeSpecName "kube-api-access-4snwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.699055 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8e63a5d-4ddb-4c97-8204-bdd6342418bd-kube-api-access-l6djj" (OuterVolumeSpecName: "kube-api-access-l6djj") pod "d8e63a5d-4ddb-4c97-8204-bdd6342418bd" (UID: "d8e63a5d-4ddb-4c97-8204-bdd6342418bd"). InnerVolumeSpecName "kube-api-access-l6djj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.699173 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab1bdf5-00ae-422b-9edc-f663f448c46b-kube-api-access-l6rhx" (OuterVolumeSpecName: "kube-api-access-l6rhx") pod "dab1bdf5-00ae-422b-9edc-f663f448c46b" (UID: "dab1bdf5-00ae-422b-9edc-f663f448c46b"). InnerVolumeSpecName "kube-api-access-l6rhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.794934 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvdrh\" (UniqueName: \"kubernetes.io/projected/26565c7a-594f-4ca2-b6c9-ea0527c04619-kube-api-access-qvdrh\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.794980 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4snwk\" (UniqueName: \"kubernetes.io/projected/387ad041-1225-4993-a8bc-7d63648e123a-kube-api-access-4snwk\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.794994 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/387ad041-1225-4993-a8bc-7d63648e123a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.795008 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6djj\" (UniqueName: \"kubernetes.io/projected/d8e63a5d-4ddb-4c97-8204-bdd6342418bd-kube-api-access-l6djj\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.795025 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26565c7a-594f-4ca2-b6c9-ea0527c04619-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.795041 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8e63a5d-4ddb-4c97-8204-bdd6342418bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.795056 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6rhx\" (UniqueName: \"kubernetes.io/projected/dab1bdf5-00ae-422b-9edc-f663f448c46b-kube-api-access-l6rhx\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.795076 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dab1bdf5-00ae-422b-9edc-f663f448c46b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.795094 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv6fp\" (UniqueName: \"kubernetes.io/projected/2d70a7be-3789-4619-9e33-7b2d249345bd-kube-api-access-qv6fp\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.795106 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d70a7be-3789-4619-9e33-7b2d249345bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.903056 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t" event={"ID":"d8e63a5d-4ddb-4c97-8204-bdd6342418bd","Type":"ContainerDied","Data":"31a2129c16ec7374e4f2bba041eb76f49d5924710f21ce2ed7ae354197d4e721"} Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.903381 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31a2129c16ec7374e4f2bba041eb76f49d5924710f21ce2ed7ae354197d4e721" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.903106 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.908469 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-r6npw" event={"ID":"5f18684e-d712-4eee-ae0c-e2030de0676b","Type":"ContainerDied","Data":"f2550cad8bd0fab0c0aac259ce20bdea94d8442cb8126a962476a245a7cc3a73"} Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.908532 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2550cad8bd0fab0c0aac259ce20bdea94d8442cb8126a962476a245a7cc3a73" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.908685 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-r6npw" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.914653 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-7118-account-create-update-gqwx9" event={"ID":"dab1bdf5-00ae-422b-9edc-f663f448c46b","Type":"ContainerDied","Data":"fd0e93fc345fab66ea11733ceb28693112e97791c49d3640f221e0ea54b5e097"} Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.914704 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd0e93fc345fab66ea11733ceb28693112e97791c49d3640f221e0ea54b5e097" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.914814 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-7118-account-create-update-gqwx9" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.918523 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-4df7t" event={"ID":"26565c7a-594f-4ca2-b6c9-ea0527c04619","Type":"ContainerDied","Data":"e18a8d4efc0a767099a8b27cd190976869b3fe0f3c5a00d32636d5cdf21df83a"} Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.918615 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e18a8d4efc0a767099a8b27cd190976869b3fe0f3c5a00d32636d5cdf21df83a" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.918755 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-4df7t" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.921265 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m" event={"ID":"387ad041-1225-4993-a8bc-7d63648e123a","Type":"ContainerDied","Data":"b9308ed4700186a7029e652351752f61f0a138a6c2d5df9670ed1cffabe91eca"} Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.921315 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9308ed4700186a7029e652351752f61f0a138a6c2d5df9670ed1cffabe91eca" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.921408 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.923198 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-c6lw8" event={"ID":"2d70a7be-3789-4619-9e33-7b2d249345bd","Type":"ContainerDied","Data":"17f38195ff93959391abf32e9c413959755fbf936c060d4c54a07ec7c69e4e92"} Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.923257 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17f38195ff93959391abf32e9c413959755fbf936c060d4c54a07ec7c69e4e92" Jan 30 17:27:07 crc kubenswrapper[4875]: I0130 17:27:07.923346 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-c6lw8" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.543708 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w"] Jan 30 17:27:13 crc kubenswrapper[4875]: E0130 17:27:13.544700 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab1bdf5-00ae-422b-9edc-f663f448c46b" containerName="mariadb-account-create-update" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.544718 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab1bdf5-00ae-422b-9edc-f663f448c46b" containerName="mariadb-account-create-update" Jan 30 17:27:13 crc kubenswrapper[4875]: E0130 17:27:13.544744 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f18684e-d712-4eee-ae0c-e2030de0676b" containerName="mariadb-database-create" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.544753 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f18684e-d712-4eee-ae0c-e2030de0676b" containerName="mariadb-database-create" Jan 30 17:27:13 crc kubenswrapper[4875]: E0130 17:27:13.544768 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d70a7be-3789-4619-9e33-7b2d249345bd" containerName="mariadb-database-create" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.544776 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d70a7be-3789-4619-9e33-7b2d249345bd" containerName="mariadb-database-create" Jan 30 17:27:13 crc kubenswrapper[4875]: E0130 17:27:13.544787 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8e63a5d-4ddb-4c97-8204-bdd6342418bd" containerName="mariadb-account-create-update" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.544794 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e63a5d-4ddb-4c97-8204-bdd6342418bd" containerName="mariadb-account-create-update" Jan 30 17:27:13 crc kubenswrapper[4875]: E0130 17:27:13.544804 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26565c7a-594f-4ca2-b6c9-ea0527c04619" containerName="mariadb-database-create" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.544811 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="26565c7a-594f-4ca2-b6c9-ea0527c04619" containerName="mariadb-database-create" Jan 30 17:27:13 crc kubenswrapper[4875]: E0130 17:27:13.544825 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="387ad041-1225-4993-a8bc-7d63648e123a" containerName="mariadb-account-create-update" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.544833 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="387ad041-1225-4993-a8bc-7d63648e123a" containerName="mariadb-account-create-update" Jan 30 17:27:13 crc 
kubenswrapper[4875]: I0130 17:27:13.545015 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d70a7be-3789-4619-9e33-7b2d249345bd" containerName="mariadb-database-create" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.545031 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab1bdf5-00ae-422b-9edc-f663f448c46b" containerName="mariadb-account-create-update" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.545041 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f18684e-d712-4eee-ae0c-e2030de0676b" containerName="mariadb-database-create" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.545055 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8e63a5d-4ddb-4c97-8204-bdd6342418bd" containerName="mariadb-account-create-update" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.545072 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="387ad041-1225-4993-a8bc-7d63648e123a" containerName="mariadb-account-create-update" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.545085 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="26565c7a-594f-4ca2-b6c9-ea0527c04619" containerName="mariadb-database-create" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.545764 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.548432 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.548980 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-fjg4m" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.549837 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.552411 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w"] Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.685936 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cebc96df-af7e-409f-94ea-aaa530661527-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-rrm2w\" (UID: \"cebc96df-af7e-409f-94ea-aaa530661527\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.686032 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s62zv\" (UniqueName: \"kubernetes.io/projected/cebc96df-af7e-409f-94ea-aaa530661527-kube-api-access-s62zv\") pod \"nova-kuttl-cell0-conductor-db-sync-rrm2w\" (UID: \"cebc96df-af7e-409f-94ea-aaa530661527\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.686175 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cebc96df-af7e-409f-94ea-aaa530661527-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-rrm2w\" (UID: \"cebc96df-af7e-409f-94ea-aaa530661527\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" Jan 30 17:27:13 crc 
kubenswrapper[4875]: I0130 17:27:13.787749 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cebc96df-af7e-409f-94ea-aaa530661527-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-rrm2w\" (UID: \"cebc96df-af7e-409f-94ea-aaa530661527\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.787863 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s62zv\" (UniqueName: \"kubernetes.io/projected/cebc96df-af7e-409f-94ea-aaa530661527-kube-api-access-s62zv\") pod \"nova-kuttl-cell0-conductor-db-sync-rrm2w\" (UID: \"cebc96df-af7e-409f-94ea-aaa530661527\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.787965 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cebc96df-af7e-409f-94ea-aaa530661527-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-rrm2w\" (UID: \"cebc96df-af7e-409f-94ea-aaa530661527\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.794152 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cebc96df-af7e-409f-94ea-aaa530661527-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-rrm2w\" (UID: \"cebc96df-af7e-409f-94ea-aaa530661527\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.799106 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cebc96df-af7e-409f-94ea-aaa530661527-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-rrm2w\" (UID: \"cebc96df-af7e-409f-94ea-aaa530661527\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.806300 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s62zv\" (UniqueName: \"kubernetes.io/projected/cebc96df-af7e-409f-94ea-aaa530661527-kube-api-access-s62zv\") pod \"nova-kuttl-cell0-conductor-db-sync-rrm2w\" (UID: \"cebc96df-af7e-409f-94ea-aaa530661527\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" Jan 30 17:27:13 crc kubenswrapper[4875]: I0130 17:27:13.862397 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" Jan 30 17:27:14 crc kubenswrapper[4875]: I0130 17:27:14.288076 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w"] Jan 30 17:27:14 crc kubenswrapper[4875]: I0130 17:27:14.980205 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" event={"ID":"cebc96df-af7e-409f-94ea-aaa530661527","Type":"ContainerStarted","Data":"e63343d8e7d1b1b510ca26306da702264278c4cb9e3a6e9f3c45d989ecaca591"} Jan 30 17:27:14 crc kubenswrapper[4875]: I0130 17:27:14.980503 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" event={"ID":"cebc96df-af7e-409f-94ea-aaa530661527","Type":"ContainerStarted","Data":"300923c4d2f90edf47faed50f97e570390f836bf1336216cb62cff0fd32c0cee"} Jan 30 17:27:14 crc kubenswrapper[4875]: I0130 17:27:14.998861 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" podStartSLOduration=1.998840097 podStartE2EDuration="1.998840097s" podCreationTimestamp="2026-01-30 17:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:27:14.996324601 +0000 UTC m=+1845.543687984" watchObservedRunningTime="2026-01-30 17:27:14.998840097 +0000 UTC m=+1845.546203480" Jan 30 17:27:20 crc kubenswrapper[4875]: I0130 17:27:20.015204 4875 generic.go:334] "Generic (PLEG): container finished" podID="cebc96df-af7e-409f-94ea-aaa530661527" containerID="e63343d8e7d1b1b510ca26306da702264278c4cb9e3a6e9f3c45d989ecaca591" exitCode=0 Jan 30 17:27:20 crc kubenswrapper[4875]: I0130 17:27:20.015291 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" event={"ID":"cebc96df-af7e-409f-94ea-aaa530661527","Type":"ContainerDied","Data":"e63343d8e7d1b1b510ca26306da702264278c4cb9e3a6e9f3c45d989ecaca591"} Jan 30 17:27:21 crc kubenswrapper[4875]: I0130 17:27:21.325651 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" Jan 30 17:27:21 crc kubenswrapper[4875]: I0130 17:27:21.403546 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cebc96df-af7e-409f-94ea-aaa530661527-scripts\") pod \"cebc96df-af7e-409f-94ea-aaa530661527\" (UID: \"cebc96df-af7e-409f-94ea-aaa530661527\") " Jan 30 17:27:21 crc kubenswrapper[4875]: I0130 17:27:21.403615 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s62zv\" (UniqueName: \"kubernetes.io/projected/cebc96df-af7e-409f-94ea-aaa530661527-kube-api-access-s62zv\") pod \"cebc96df-af7e-409f-94ea-aaa530661527\" (UID: \"cebc96df-af7e-409f-94ea-aaa530661527\") " Jan 30 17:27:21 crc kubenswrapper[4875]: I0130 17:27:21.403669 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cebc96df-af7e-409f-94ea-aaa530661527-config-data\") pod \"cebc96df-af7e-409f-94ea-aaa530661527\" (UID: \"cebc96df-af7e-409f-94ea-aaa530661527\") " Jan 30 17:27:21 crc kubenswrapper[4875]: I0130 17:27:21.409028 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cebc96df-af7e-409f-94ea-aaa530661527-kube-api-access-s62zv" (OuterVolumeSpecName: "kube-api-access-s62zv") pod "cebc96df-af7e-409f-94ea-aaa530661527" (UID: "cebc96df-af7e-409f-94ea-aaa530661527"). InnerVolumeSpecName "kube-api-access-s62zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:27:21 crc kubenswrapper[4875]: I0130 17:27:21.410834 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cebc96df-af7e-409f-94ea-aaa530661527-scripts" (OuterVolumeSpecName: "scripts") pod "cebc96df-af7e-409f-94ea-aaa530661527" (UID: "cebc96df-af7e-409f-94ea-aaa530661527"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:27:21 crc kubenswrapper[4875]: I0130 17:27:21.424607 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cebc96df-af7e-409f-94ea-aaa530661527-config-data" (OuterVolumeSpecName: "config-data") pod "cebc96df-af7e-409f-94ea-aaa530661527" (UID: "cebc96df-af7e-409f-94ea-aaa530661527"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:27:21 crc kubenswrapper[4875]: I0130 17:27:21.504952 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cebc96df-af7e-409f-94ea-aaa530661527-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:21 crc kubenswrapper[4875]: I0130 17:27:21.504987 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s62zv\" (UniqueName: \"kubernetes.io/projected/cebc96df-af7e-409f-94ea-aaa530661527-kube-api-access-s62zv\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:21 crc kubenswrapper[4875]: I0130 17:27:21.504997 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cebc96df-af7e-409f-94ea-aaa530661527-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.033912 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" event={"ID":"cebc96df-af7e-409f-94ea-aaa530661527","Type":"ContainerDied","Data":"300923c4d2f90edf47faed50f97e570390f836bf1336216cb62cff0fd32c0cee"} Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.034157 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="300923c4d2f90edf47faed50f97e570390f836bf1336216cb62cff0fd32c0cee" Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.034016 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w" Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.103853 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:27:22 crc kubenswrapper[4875]: E0130 17:27:22.104165 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cebc96df-af7e-409f-94ea-aaa530661527" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.104180 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="cebc96df-af7e-409f-94ea-aaa530661527" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.104312 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="cebc96df-af7e-409f-94ea-aaa530661527" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.104808 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.107229 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-fjg4m" Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.108363 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.118108 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.214915 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sms9r\" (UniqueName: \"kubernetes.io/projected/3789d70d-0e1c-44e9-91f5-86c2c3dc4a33-kube-api-access-sms9r\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3789d70d-0e1c-44e9-91f5-86c2c3dc4a33\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.214953 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3789d70d-0e1c-44e9-91f5-86c2c3dc4a33-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3789d70d-0e1c-44e9-91f5-86c2c3dc4a33\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.316445 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sms9r\" (UniqueName: \"kubernetes.io/projected/3789d70d-0e1c-44e9-91f5-86c2c3dc4a33-kube-api-access-sms9r\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3789d70d-0e1c-44e9-91f5-86c2c3dc4a33\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.316498 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3789d70d-0e1c-44e9-91f5-86c2c3dc4a33-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3789d70d-0e1c-44e9-91f5-86c2c3dc4a33\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.321237 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3789d70d-0e1c-44e9-91f5-86c2c3dc4a33-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3789d70d-0e1c-44e9-91f5-86c2c3dc4a33\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.338392 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sms9r\" (UniqueName: \"kubernetes.io/projected/3789d70d-0e1c-44e9-91f5-86c2c3dc4a33-kube-api-access-sms9r\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"3789d70d-0e1c-44e9-91f5-86c2c3dc4a33\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.422637 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:27:22 crc kubenswrapper[4875]: I0130 17:27:22.858389 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:27:23 crc kubenswrapper[4875]: I0130 17:27:23.042102 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"3789d70d-0e1c-44e9-91f5-86c2c3dc4a33","Type":"ContainerStarted","Data":"8a8f401632fef95a064475357d5959f9c0c8dbfe6c0ca9be3c05db65a1fb1bf5"} Jan 30 17:27:24 crc kubenswrapper[4875]: I0130 17:27:24.052830 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"3789d70d-0e1c-44e9-91f5-86c2c3dc4a33","Type":"ContainerStarted","Data":"a478c42709c6655749ea65795f5769f0aa2abb94d90394203f03e63050784459"} Jan 30 17:27:24 crc kubenswrapper[4875]: I0130 17:27:24.053860 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:27:24 crc kubenswrapper[4875]: I0130 17:27:24.070221 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.070199295 podStartE2EDuration="2.070199295s" podCreationTimestamp="2026-01-30 17:27:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:27:24.067729321 +0000 UTC m=+1854.615092714" watchObservedRunningTime="2026-01-30 17:27:24.070199295 +0000 UTC m=+1854.617562678" Jan 30 17:27:32 crc kubenswrapper[4875]: I0130 17:27:32.351271 4875 scope.go:117] "RemoveContainer" containerID="5fd1203d63452140b67d813d8ae19a230a52832b27f87a8f7c01d200ca8bfee3" Jan 30 17:27:32 crc kubenswrapper[4875]: I0130 17:27:32.449600 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:27:32 crc kubenswrapper[4875]: I0130 17:27:32.844223 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76"] Jan 30 17:27:32 crc kubenswrapper[4875]: I0130 17:27:32.845175 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" Jan 30 17:27:32 crc kubenswrapper[4875]: I0130 17:27:32.853471 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 30 17:27:32 crc kubenswrapper[4875]: I0130 17:27:32.853867 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 30 17:27:32 crc kubenswrapper[4875]: I0130 17:27:32.861396 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76"] Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.000572 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0579fff9-2e84-4cb6-8a96-08144cfecf05-scripts\") pod \"nova-kuttl-cell0-cell-mapping-5jz76\" (UID: \"0579fff9-2e84-4cb6-8a96-08144cfecf05\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.000896 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpf5t\" (UniqueName: \"kubernetes.io/projected/0579fff9-2e84-4cb6-8a96-08144cfecf05-kube-api-access-gpf5t\") pod \"nova-kuttl-cell0-cell-mapping-5jz76\" (UID: \"0579fff9-2e84-4cb6-8a96-08144cfecf05\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.000986 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0579fff9-2e84-4cb6-8a96-08144cfecf05-config-data\") pod \"nova-kuttl-cell0-cell-mapping-5jz76\" (UID: \"0579fff9-2e84-4cb6-8a96-08144cfecf05\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.011155 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.012345 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.014976 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.027476 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.099192 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.101260 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.102517 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0579fff9-2e84-4cb6-8a96-08144cfecf05-scripts\") pod \"nova-kuttl-cell0-cell-mapping-5jz76\" (UID: \"0579fff9-2e84-4cb6-8a96-08144cfecf05\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.102604 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5mvj\" (UniqueName: \"kubernetes.io/projected/455921c1-b5b6-42e8-b050-920a49161c06-kube-api-access-v5mvj\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"455921c1-b5b6-42e8-b050-920a49161c06\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.102641 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpf5t\" (UniqueName: \"kubernetes.io/projected/0579fff9-2e84-4cb6-8a96-08144cfecf05-kube-api-access-gpf5t\") pod \"nova-kuttl-cell0-cell-mapping-5jz76\" (UID: \"0579fff9-2e84-4cb6-8a96-08144cfecf05\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.102698 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0579fff9-2e84-4cb6-8a96-08144cfecf05-config-data\") pod \"nova-kuttl-cell0-cell-mapping-5jz76\" (UID: \"0579fff9-2e84-4cb6-8a96-08144cfecf05\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.102750 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455921c1-b5b6-42e8-b050-920a49161c06-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"455921c1-b5b6-42e8-b050-920a49161c06\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.108411 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0579fff9-2e84-4cb6-8a96-08144cfecf05-scripts\") pod \"nova-kuttl-cell0-cell-mapping-5jz76\" (UID: \"0579fff9-2e84-4cb6-8a96-08144cfecf05\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.113427 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.116501 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0579fff9-2e84-4cb6-8a96-08144cfecf05-config-data\") pod \"nova-kuttl-cell0-cell-mapping-5jz76\" (UID: \"0579fff9-2e84-4cb6-8a96-08144cfecf05\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.122939 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.124297 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.128121 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.133146 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpf5t\" (UniqueName: \"kubernetes.io/projected/0579fff9-2e84-4cb6-8a96-08144cfecf05-kube-api-access-gpf5t\") pod \"nova-kuttl-cell0-cell-mapping-5jz76\" (UID: \"0579fff9-2e84-4cb6-8a96-08144cfecf05\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.133338 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.142018 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.184578 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.204495 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455921c1-b5b6-42e8-b050-920a49161c06-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"455921c1-b5b6-42e8-b050-920a49161c06\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.204544 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj45l\" (UniqueName: \"kubernetes.io/projected/3e86b1a7-9d16-4f4e-99f2-70d7d4819d83-kube-api-access-tj45l\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e86b1a7-9d16-4f4e-99f2-70d7d4819d83\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.204620 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-logs\") pod \"nova-kuttl-api-0\" (UID: \"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.204638 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-config-data\") pod \"nova-kuttl-api-0\" (UID: \"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.204676 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5mvj\" (UniqueName: \"kubernetes.io/projected/455921c1-b5b6-42e8-b050-920a49161c06-kube-api-access-v5mvj\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"455921c1-b5b6-42e8-b050-920a49161c06\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.204694 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjk7q\" (UniqueName: \"kubernetes.io/projected/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-kube-api-access-wjk7q\") pod \"nova-kuttl-api-0\" (UID: \"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a\") " 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.204738 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e86b1a7-9d16-4f4e-99f2-70d7d4819d83-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e86b1a7-9d16-4f4e-99f2-70d7d4819d83\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.224663 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5mvj\" (UniqueName: \"kubernetes.io/projected/455921c1-b5b6-42e8-b050-920a49161c06-kube-api-access-v5mvj\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"455921c1-b5b6-42e8-b050-920a49161c06\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.231471 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455921c1-b5b6-42e8-b050-920a49161c06-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"455921c1-b5b6-42e8-b050-920a49161c06\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.306169 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e86b1a7-9d16-4f4e-99f2-70d7d4819d83-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e86b1a7-9d16-4f4e-99f2-70d7d4819d83\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.306282 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj45l\" (UniqueName: \"kubernetes.io/projected/3e86b1a7-9d16-4f4e-99f2-70d7d4819d83-kube-api-access-tj45l\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e86b1a7-9d16-4f4e-99f2-70d7d4819d83\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.306363 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-logs\") pod \"nova-kuttl-api-0\" (UID: \"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.306385 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-config-data\") pod \"nova-kuttl-api-0\" (UID: \"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.306441 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjk7q\" (UniqueName: \"kubernetes.io/projected/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-kube-api-access-wjk7q\") pod \"nova-kuttl-api-0\" (UID: \"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.309163 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-logs\") pod \"nova-kuttl-api-0\" (UID: \"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.312098 4875 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e86b1a7-9d16-4f4e-99f2-70d7d4819d83-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e86b1a7-9d16-4f4e-99f2-70d7d4819d83\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.314648 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-config-data\") pod \"nova-kuttl-api-0\" (UID: \"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.317126 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.318710 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.328050 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.338624 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.339696 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.349123 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjk7q\" (UniqueName: \"kubernetes.io/projected/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-kube-api-access-wjk7q\") pod \"nova-kuttl-api-0\" (UID: \"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.354553 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj45l\" (UniqueName: \"kubernetes.io/projected/3e86b1a7-9d16-4f4e-99f2-70d7d4819d83-kube-api-access-tj45l\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e86b1a7-9d16-4f4e-99f2-70d7d4819d83\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.407550 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8be4a12e-9d3b-45c6-b5be-04b23795459e-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"8be4a12e-9d3b-45c6-b5be-04b23795459e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.407614 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8be4a12e-9d3b-45c6-b5be-04b23795459e-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"8be4a12e-9d3b-45c6-b5be-04b23795459e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.408121 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwlsd\" (UniqueName: \"kubernetes.io/projected/8be4a12e-9d3b-45c6-b5be-04b23795459e-kube-api-access-qwlsd\") pod \"nova-kuttl-metadata-0\" (UID: \"8be4a12e-9d3b-45c6-b5be-04b23795459e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.494068 
4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.507676 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.509549 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwlsd\" (UniqueName: \"kubernetes.io/projected/8be4a12e-9d3b-45c6-b5be-04b23795459e-kube-api-access-qwlsd\") pod \"nova-kuttl-metadata-0\" (UID: \"8be4a12e-9d3b-45c6-b5be-04b23795459e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.509704 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8be4a12e-9d3b-45c6-b5be-04b23795459e-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"8be4a12e-9d3b-45c6-b5be-04b23795459e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.509766 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8be4a12e-9d3b-45c6-b5be-04b23795459e-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"8be4a12e-9d3b-45c6-b5be-04b23795459e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.510202 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8be4a12e-9d3b-45c6-b5be-04b23795459e-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"8be4a12e-9d3b-45c6-b5be-04b23795459e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.514719 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8be4a12e-9d3b-45c6-b5be-04b23795459e-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"8be4a12e-9d3b-45c6-b5be-04b23795459e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.540438 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwlsd\" (UniqueName: \"kubernetes.io/projected/8be4a12e-9d3b-45c6-b5be-04b23795459e-kube-api-access-qwlsd\") pod \"nova-kuttl-metadata-0\" (UID: \"8be4a12e-9d3b-45c6-b5be-04b23795459e\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.657534 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.714097 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76"] Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.799773 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.829762 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 30 17:27:33 crc kubenswrapper[4875]: W0130 17:27:33.831854 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02521e5a_0702_44ab_a1c2_81f6dfe3eb3a.slice/crio-f4827f26dd1002e42a57c56fe02149d5efaa510d9fe3b437a88ce5194b10c238 WatchSource:0}: Error finding container f4827f26dd1002e42a57c56fe02149d5efaa510d9fe3b437a88ce5194b10c238: Status 404 returned error can't find the container with id f4827f26dd1002e42a57c56fe02149d5efaa510d9fe3b437a88ce5194b10c238 Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.904645 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7"] Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.905767 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.915081 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 30 17:27:33 crc kubenswrapper[4875]: I0130 17:27:33.915095 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 30 17:27:34 crc kubenswrapper[4875]: I0130 17:27:34.019380 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/337d6735-5e62-440f-80dd-78cfee827806-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-95xf7\" (UID: \"337d6735-5e62-440f-80dd-78cfee827806\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" Jan 30 17:27:34 crc kubenswrapper[4875]: I0130 17:27:34.019436 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sfh5\" (UniqueName: \"kubernetes.io/projected/337d6735-5e62-440f-80dd-78cfee827806-kube-api-access-2sfh5\") pod \"nova-kuttl-cell1-conductor-db-sync-95xf7\" (UID: \"337d6735-5e62-440f-80dd-78cfee827806\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" Jan 30 17:27:34 crc kubenswrapper[4875]: I0130 17:27:34.019497 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/337d6735-5e62-440f-80dd-78cfee827806-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-95xf7\" (UID: \"337d6735-5e62-440f-80dd-78cfee827806\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" Jan 30 17:27:34 crc kubenswrapper[4875]: I0130 17:27:34.062652 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:27:34 crc kubenswrapper[4875]: I0130 17:27:34.078704 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7"] Jan 30 17:27:34 crc kubenswrapper[4875]: I0130 17:27:34.120991 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/337d6735-5e62-440f-80dd-78cfee827806-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-95xf7\" (UID: \"337d6735-5e62-440f-80dd-78cfee827806\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" Jan 30 17:27:34 crc kubenswrapper[4875]: I0130 17:27:34.121140 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sfh5\" (UniqueName: \"kubernetes.io/projected/337d6735-5e62-440f-80dd-78cfee827806-kube-api-access-2sfh5\") pod \"nova-kuttl-cell1-conductor-db-sync-95xf7\" (UID: \"337d6735-5e62-440f-80dd-78cfee827806\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" Jan 30 17:27:34 crc kubenswrapper[4875]: I0130 17:27:34.121296 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/337d6735-5e62-440f-80dd-78cfee827806-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-95xf7\" (UID: \"337d6735-5e62-440f-80dd-78cfee827806\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" Jan 30 17:27:34 crc kubenswrapper[4875]: I0130 17:27:34.127569 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/337d6735-5e62-440f-80dd-78cfee827806-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-95xf7\" (UID: \"337d6735-5e62-440f-80dd-78cfee827806\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" Jan 30 17:27:34 crc kubenswrapper[4875]: I0130 17:27:34.128016 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/337d6735-5e62-440f-80dd-78cfee827806-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-95xf7\" (UID: \"337d6735-5e62-440f-80dd-78cfee827806\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" Jan 30 17:27:34 crc kubenswrapper[4875]: I0130 17:27:34.144216 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sfh5\" (UniqueName: \"kubernetes.io/projected/337d6735-5e62-440f-80dd-78cfee827806-kube-api-access-2sfh5\") pod \"nova-kuttl-cell1-conductor-db-sync-95xf7\" (UID: \"337d6735-5e62-440f-80dd-78cfee827806\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" Jan 30 17:27:34 crc kubenswrapper[4875]: I0130 17:27:34.147227 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3e86b1a7-9d16-4f4e-99f2-70d7d4819d83","Type":"ContainerStarted","Data":"e37a8f36ad78db36f30515b7265afd29f638dbb37e2198fa674dbaf543cf6ec3"} Jan 30 17:27:34 crc kubenswrapper[4875]: I0130 17:27:34.147264 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a","Type":"ContainerStarted","Data":"f4827f26dd1002e42a57c56fe02149d5efaa510d9fe3b437a88ce5194b10c238"} Jan 30 17:27:34 crc kubenswrapper[4875]: I0130 17:27:34.148918 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" event={"ID":"0579fff9-2e84-4cb6-8a96-08144cfecf05","Type":"ContainerStarted","Data":"3b511c3030636492948ee48006a2639554a118a0c15da56fc7a69808f5531de0"} Jan 30 17:27:34 crc kubenswrapper[4875]: 
I0130 17:27:34.150250 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"455921c1-b5b6-42e8-b050-920a49161c06","Type":"ContainerStarted","Data":"87c14020b44158d01aeda0522715fdc0896fedd2fb5df044a2b86382d8b702d7"} Jan 30 17:27:34 crc kubenswrapper[4875]: I0130 17:27:34.162878 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" Jan 30 17:27:34 crc kubenswrapper[4875]: I0130 17:27:34.311906 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:27:34 crc kubenswrapper[4875]: W0130 17:27:34.328955 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8be4a12e_9d3b_45c6_b5be_04b23795459e.slice/crio-02cf2764f80f76ba1103bb503ea70e4b097b83504103c0722ff27ea2f78be2ed WatchSource:0}: Error finding container 02cf2764f80f76ba1103bb503ea70e4b097b83504103c0722ff27ea2f78be2ed: Status 404 returned error can't find the container with id 02cf2764f80f76ba1103bb503ea70e4b097b83504103c0722ff27ea2f78be2ed Jan 30 17:27:34 crc kubenswrapper[4875]: I0130 17:27:34.443976 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7"] Jan 30 17:27:35 crc kubenswrapper[4875]: I0130 17:27:35.160753 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a","Type":"ContainerStarted","Data":"7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a"} Jan 30 17:27:35 crc kubenswrapper[4875]: I0130 17:27:35.160799 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a","Type":"ContainerStarted","Data":"3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39"} Jan 30 17:27:35 crc kubenswrapper[4875]: I0130 17:27:35.174428 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" event={"ID":"337d6735-5e62-440f-80dd-78cfee827806","Type":"ContainerStarted","Data":"2be2a9e37c333e0f75cad0d6af4d18570a560f5bfe64aa3694964dfcb1112503"} Jan 30 17:27:35 crc kubenswrapper[4875]: I0130 17:27:35.174487 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" event={"ID":"337d6735-5e62-440f-80dd-78cfee827806","Type":"ContainerStarted","Data":"c0870add6afd6e5809be99f5447a8bee642f9f7a74c6e245352fe1fd74da4da0"} Jan 30 17:27:35 crc kubenswrapper[4875]: I0130 17:27:35.184246 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" event={"ID":"0579fff9-2e84-4cb6-8a96-08144cfecf05","Type":"ContainerStarted","Data":"967fd9e64f6903e19dee956b5e2fe5943168c04ecb829e537394f3feee298eba"} Jan 30 17:27:35 crc kubenswrapper[4875]: I0130 17:27:35.194958 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8be4a12e-9d3b-45c6-b5be-04b23795459e","Type":"ContainerStarted","Data":"d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67"} Jan 30 17:27:35 crc kubenswrapper[4875]: I0130 17:27:35.195021 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" 
event={"ID":"8be4a12e-9d3b-45c6-b5be-04b23795459e","Type":"ContainerStarted","Data":"a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63"} Jan 30 17:27:35 crc kubenswrapper[4875]: I0130 17:27:35.195035 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8be4a12e-9d3b-45c6-b5be-04b23795459e","Type":"ContainerStarted","Data":"02cf2764f80f76ba1103bb503ea70e4b097b83504103c0722ff27ea2f78be2ed"} Jan 30 17:27:35 crc kubenswrapper[4875]: I0130 17:27:35.195820 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.1957948 podStartE2EDuration="2.1957948s" podCreationTimestamp="2026-01-30 17:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:27:35.183471519 +0000 UTC m=+1865.730834902" watchObservedRunningTime="2026-01-30 17:27:35.1957948 +0000 UTC m=+1865.743158213" Jan 30 17:27:35 crc kubenswrapper[4875]: I0130 17:27:35.196865 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"455921c1-b5b6-42e8-b050-920a49161c06","Type":"ContainerStarted","Data":"65042108355da07156c563cba5f86beb047c983b4d734a6d89638aa3420b2e31"} Jan 30 17:27:35 crc kubenswrapper[4875]: I0130 17:27:35.207166 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3e86b1a7-9d16-4f4e-99f2-70d7d4819d83","Type":"ContainerStarted","Data":"f57c50d3733129d489e5e9c939ee00893b5c45783c7f529140619e45d5f35e93"} Jan 30 17:27:35 crc kubenswrapper[4875]: I0130 17:27:35.209111 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" podStartSLOduration=2.209095332 podStartE2EDuration="2.209095332s" podCreationTimestamp="2026-01-30 17:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:27:35.203618956 +0000 UTC m=+1865.750982359" watchObservedRunningTime="2026-01-30 17:27:35.209095332 +0000 UTC m=+1865.756458715" Jan 30 17:27:35 crc kubenswrapper[4875]: I0130 17:27:35.228954 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=3.22893464 podStartE2EDuration="3.22893464s" podCreationTimestamp="2026-01-30 17:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:27:35.220730953 +0000 UTC m=+1865.768094356" watchObservedRunningTime="2026-01-30 17:27:35.22893464 +0000 UTC m=+1865.776298023" Jan 30 17:27:35 crc kubenswrapper[4875]: I0130 17:27:35.244698 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" podStartSLOduration=3.244682846 podStartE2EDuration="3.244682846s" podCreationTimestamp="2026-01-30 17:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:27:35.237810908 +0000 UTC m=+1865.785174291" watchObservedRunningTime="2026-01-30 17:27:35.244682846 +0000 UTC m=+1865.792046239" Jan 30 17:27:35 crc kubenswrapper[4875]: I0130 17:27:35.259481 4875 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.259456092 podStartE2EDuration="2.259456092s" podCreationTimestamp="2026-01-30 17:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:27:35.258530954 +0000 UTC m=+1865.805894377" watchObservedRunningTime="2026-01-30 17:27:35.259456092 +0000 UTC m=+1865.806819495" Jan 30 17:27:35 crc kubenswrapper[4875]: I0130 17:27:35.278470 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.278444945 podStartE2EDuration="2.278444945s" podCreationTimestamp="2026-01-30 17:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:27:35.274535467 +0000 UTC m=+1865.821898860" watchObservedRunningTime="2026-01-30 17:27:35.278444945 +0000 UTC m=+1865.825808338" Jan 30 17:27:37 crc kubenswrapper[4875]: I0130 17:27:37.231967 4875 generic.go:334] "Generic (PLEG): container finished" podID="337d6735-5e62-440f-80dd-78cfee827806" containerID="2be2a9e37c333e0f75cad0d6af4d18570a560f5bfe64aa3694964dfcb1112503" exitCode=0 Jan 30 17:27:37 crc kubenswrapper[4875]: I0130 17:27:37.232056 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" event={"ID":"337d6735-5e62-440f-80dd-78cfee827806","Type":"ContainerDied","Data":"2be2a9e37c333e0f75cad0d6af4d18570a560f5bfe64aa3694964dfcb1112503"} Jan 30 17:27:38 crc kubenswrapper[4875]: I0130 17:27:38.347304 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:27:38 crc kubenswrapper[4875]: I0130 17:27:38.508394 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:38 crc kubenswrapper[4875]: I0130 17:27:38.592681 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" Jan 30 17:27:38 crc kubenswrapper[4875]: I0130 17:27:38.658182 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:38 crc kubenswrapper[4875]: I0130 17:27:38.660863 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:38 crc kubenswrapper[4875]: I0130 17:27:38.702228 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sfh5\" (UniqueName: \"kubernetes.io/projected/337d6735-5e62-440f-80dd-78cfee827806-kube-api-access-2sfh5\") pod \"337d6735-5e62-440f-80dd-78cfee827806\" (UID: \"337d6735-5e62-440f-80dd-78cfee827806\") " Jan 30 17:27:38 crc kubenswrapper[4875]: I0130 17:27:38.702654 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/337d6735-5e62-440f-80dd-78cfee827806-scripts\") pod \"337d6735-5e62-440f-80dd-78cfee827806\" (UID: \"337d6735-5e62-440f-80dd-78cfee827806\") " Jan 30 17:27:38 crc kubenswrapper[4875]: I0130 17:27:38.702792 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/337d6735-5e62-440f-80dd-78cfee827806-config-data\") pod \"337d6735-5e62-440f-80dd-78cfee827806\" (UID: \"337d6735-5e62-440f-80dd-78cfee827806\") " Jan 30 17:27:38 crc kubenswrapper[4875]: I0130 17:27:38.709905 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/337d6735-5e62-440f-80dd-78cfee827806-kube-api-access-2sfh5" (OuterVolumeSpecName: "kube-api-access-2sfh5") pod "337d6735-5e62-440f-80dd-78cfee827806" (UID: "337d6735-5e62-440f-80dd-78cfee827806"). InnerVolumeSpecName "kube-api-access-2sfh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:27:38 crc kubenswrapper[4875]: I0130 17:27:38.710470 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/337d6735-5e62-440f-80dd-78cfee827806-scripts" (OuterVolumeSpecName: "scripts") pod "337d6735-5e62-440f-80dd-78cfee827806" (UID: "337d6735-5e62-440f-80dd-78cfee827806"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:27:38 crc kubenswrapper[4875]: I0130 17:27:38.733565 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/337d6735-5e62-440f-80dd-78cfee827806-config-data" (OuterVolumeSpecName: "config-data") pod "337d6735-5e62-440f-80dd-78cfee827806" (UID: "337d6735-5e62-440f-80dd-78cfee827806"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:27:38 crc kubenswrapper[4875]: I0130 17:27:38.804613 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sfh5\" (UniqueName: \"kubernetes.io/projected/337d6735-5e62-440f-80dd-78cfee827806-kube-api-access-2sfh5\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:38 crc kubenswrapper[4875]: I0130 17:27:38.804650 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/337d6735-5e62-440f-80dd-78cfee827806-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:38 crc kubenswrapper[4875]: I0130 17:27:38.804661 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/337d6735-5e62-440f-80dd-78cfee827806-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.254314 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" event={"ID":"337d6735-5e62-440f-80dd-78cfee827806","Type":"ContainerDied","Data":"c0870add6afd6e5809be99f5447a8bee642f9f7a74c6e245352fe1fd74da4da0"} Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.254372 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0870add6afd6e5809be99f5447a8bee642f9f7a74c6e245352fe1fd74da4da0" Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.255090 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7" Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.257378 4875 generic.go:334] "Generic (PLEG): container finished" podID="0579fff9-2e84-4cb6-8a96-08144cfecf05" containerID="967fd9e64f6903e19dee956b5e2fe5943168c04ecb829e537394f3feee298eba" exitCode=0 Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.257497 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" event={"ID":"0579fff9-2e84-4cb6-8a96-08144cfecf05","Type":"ContainerDied","Data":"967fd9e64f6903e19dee956b5e2fe5943168c04ecb829e537394f3feee298eba"} Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.332996 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:27:39 crc kubenswrapper[4875]: E0130 17:27:39.333428 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="337d6735-5e62-440f-80dd-78cfee827806" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.333445 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="337d6735-5e62-440f-80dd-78cfee827806" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.333675 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="337d6735-5e62-440f-80dd-78cfee827806" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.334349 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.339102 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.344491 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.414005 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9h5p\" (UniqueName: \"kubernetes.io/projected/08294a73-b9f7-404e-b0fa-7d5b85501c39-kube-api-access-g9h5p\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"08294a73-b9f7-404e-b0fa-7d5b85501c39\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.414142 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08294a73-b9f7-404e-b0fa-7d5b85501c39-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"08294a73-b9f7-404e-b0fa-7d5b85501c39\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.514957 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08294a73-b9f7-404e-b0fa-7d5b85501c39-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"08294a73-b9f7-404e-b0fa-7d5b85501c39\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.515028 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9h5p\" (UniqueName: \"kubernetes.io/projected/08294a73-b9f7-404e-b0fa-7d5b85501c39-kube-api-access-g9h5p\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"08294a73-b9f7-404e-b0fa-7d5b85501c39\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.520453 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08294a73-b9f7-404e-b0fa-7d5b85501c39-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"08294a73-b9f7-404e-b0fa-7d5b85501c39\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.534103 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9h5p\" (UniqueName: \"kubernetes.io/projected/08294a73-b9f7-404e-b0fa-7d5b85501c39-kube-api-access-g9h5p\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"08294a73-b9f7-404e-b0fa-7d5b85501c39\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:27:39 crc kubenswrapper[4875]: I0130 17:27:39.652418 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:27:40 crc kubenswrapper[4875]: I0130 17:27:40.080775 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:27:40 crc kubenswrapper[4875]: W0130 17:27:40.085802 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08294a73_b9f7_404e_b0fa_7d5b85501c39.slice/crio-6ee89fe1957fecd926cf196c0fea6bb996f4e6e2b923bf360264b13d4fae7a63 WatchSource:0}: Error finding container 6ee89fe1957fecd926cf196c0fea6bb996f4e6e2b923bf360264b13d4fae7a63: Status 404 returned error can't find the container with id 6ee89fe1957fecd926cf196c0fea6bb996f4e6e2b923bf360264b13d4fae7a63 Jan 30 17:27:40 crc kubenswrapper[4875]: I0130 17:27:40.265344 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"08294a73-b9f7-404e-b0fa-7d5b85501c39","Type":"ContainerStarted","Data":"6ee89fe1957fecd926cf196c0fea6bb996f4e6e2b923bf360264b13d4fae7a63"} Jan 30 17:27:40 crc kubenswrapper[4875]: I0130 17:27:40.504996 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" Jan 30 17:27:40 crc kubenswrapper[4875]: I0130 17:27:40.532511 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0579fff9-2e84-4cb6-8a96-08144cfecf05-config-data\") pod \"0579fff9-2e84-4cb6-8a96-08144cfecf05\" (UID: \"0579fff9-2e84-4cb6-8a96-08144cfecf05\") " Jan 30 17:27:40 crc kubenswrapper[4875]: I0130 17:27:40.532649 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0579fff9-2e84-4cb6-8a96-08144cfecf05-scripts\") pod \"0579fff9-2e84-4cb6-8a96-08144cfecf05\" (UID: \"0579fff9-2e84-4cb6-8a96-08144cfecf05\") " Jan 30 17:27:40 crc kubenswrapper[4875]: I0130 17:27:40.532680 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpf5t\" (UniqueName: \"kubernetes.io/projected/0579fff9-2e84-4cb6-8a96-08144cfecf05-kube-api-access-gpf5t\") pod \"0579fff9-2e84-4cb6-8a96-08144cfecf05\" (UID: \"0579fff9-2e84-4cb6-8a96-08144cfecf05\") " Jan 30 17:27:40 crc kubenswrapper[4875]: I0130 17:27:40.541752 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0579fff9-2e84-4cb6-8a96-08144cfecf05-scripts" (OuterVolumeSpecName: "scripts") pod "0579fff9-2e84-4cb6-8a96-08144cfecf05" (UID: "0579fff9-2e84-4cb6-8a96-08144cfecf05"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:27:40 crc kubenswrapper[4875]: I0130 17:27:40.544444 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0579fff9-2e84-4cb6-8a96-08144cfecf05-kube-api-access-gpf5t" (OuterVolumeSpecName: "kube-api-access-gpf5t") pod "0579fff9-2e84-4cb6-8a96-08144cfecf05" (UID: "0579fff9-2e84-4cb6-8a96-08144cfecf05"). InnerVolumeSpecName "kube-api-access-gpf5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:27:40 crc kubenswrapper[4875]: I0130 17:27:40.570260 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0579fff9-2e84-4cb6-8a96-08144cfecf05-config-data" (OuterVolumeSpecName: "config-data") pod "0579fff9-2e84-4cb6-8a96-08144cfecf05" (UID: "0579fff9-2e84-4cb6-8a96-08144cfecf05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:27:40 crc kubenswrapper[4875]: I0130 17:27:40.634996 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0579fff9-2e84-4cb6-8a96-08144cfecf05-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:40 crc kubenswrapper[4875]: I0130 17:27:40.635034 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0579fff9-2e84-4cb6-8a96-08144cfecf05-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:40 crc kubenswrapper[4875]: I0130 17:27:40.635044 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpf5t\" (UniqueName: \"kubernetes.io/projected/0579fff9-2e84-4cb6-8a96-08144cfecf05-kube-api-access-gpf5t\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:41 crc kubenswrapper[4875]: I0130 17:27:41.275637 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" event={"ID":"0579fff9-2e84-4cb6-8a96-08144cfecf05","Type":"ContainerDied","Data":"3b511c3030636492948ee48006a2639554a118a0c15da56fc7a69808f5531de0"} Jan 30 17:27:41 crc kubenswrapper[4875]: I0130 17:27:41.276015 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b511c3030636492948ee48006a2639554a118a0c15da56fc7a69808f5531de0" Jan 30 17:27:41 crc kubenswrapper[4875]: I0130 17:27:41.275688 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76" Jan 30 17:27:41 crc kubenswrapper[4875]: I0130 17:27:41.277270 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"08294a73-b9f7-404e-b0fa-7d5b85501c39","Type":"ContainerStarted","Data":"e10daa96c1106b7ea170767a20b18aebde5401981cf718737782da991d9a294f"} Jan 30 17:27:41 crc kubenswrapper[4875]: I0130 17:27:41.277421 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:27:41 crc kubenswrapper[4875]: I0130 17:27:41.306984 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.306961162 podStartE2EDuration="2.306961162s" podCreationTimestamp="2026-01-30 17:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:27:41.29364596 +0000 UTC m=+1871.841009343" watchObservedRunningTime="2026-01-30 17:27:41.306961162 +0000 UTC m=+1871.854324565" Jan 30 17:27:41 crc kubenswrapper[4875]: I0130 17:27:41.464349 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:27:41 crc kubenswrapper[4875]: I0130 17:27:41.464604 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="02521e5a-0702-44ab-a1c2-81f6dfe3eb3a" containerName="nova-kuttl-api-log" containerID="cri-o://3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39" gracePeriod=30 Jan 30 17:27:41 crc kubenswrapper[4875]: I0130 17:27:41.464732 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="02521e5a-0702-44ab-a1c2-81f6dfe3eb3a" containerName="nova-kuttl-api-api" containerID="cri-o://7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a" gracePeriod=30 Jan 30 17:27:41 crc kubenswrapper[4875]: I0130 17:27:41.491160 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:27:41 crc kubenswrapper[4875]: I0130 17:27:41.491346 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="3e86b1a7-9d16-4f4e-99f2-70d7d4819d83" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://f57c50d3733129d489e5e9c939ee00893b5c45783c7f529140619e45d5f35e93" gracePeriod=30 Jan 30 17:27:41 crc kubenswrapper[4875]: I0130 17:27:41.523574 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:27:41 crc kubenswrapper[4875]: I0130 17:27:41.523840 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8be4a12e-9d3b-45c6-b5be-04b23795459e" containerName="nova-kuttl-metadata-log" containerID="cri-o://a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63" gracePeriod=30 Jan 30 17:27:41 crc kubenswrapper[4875]: I0130 17:27:41.523993 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8be4a12e-9d3b-45c6-b5be-04b23795459e" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67" gracePeriod=30 Jan 30 17:27:41 crc 
kubenswrapper[4875]: I0130 17:27:41.971203 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.061089 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-logs\") pod \"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a\" (UID: \"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a\") " Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.061161 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-config-data\") pod \"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a\" (UID: \"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a\") " Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.061232 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjk7q\" (UniqueName: \"kubernetes.io/projected/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-kube-api-access-wjk7q\") pod \"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a\" (UID: \"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a\") " Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.062060 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-logs" (OuterVolumeSpecName: "logs") pod "02521e5a-0702-44ab-a1c2-81f6dfe3eb3a" (UID: "02521e5a-0702-44ab-a1c2-81f6dfe3eb3a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.069921 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-kube-api-access-wjk7q" (OuterVolumeSpecName: "kube-api-access-wjk7q") pod "02521e5a-0702-44ab-a1c2-81f6dfe3eb3a" (UID: "02521e5a-0702-44ab-a1c2-81f6dfe3eb3a"). InnerVolumeSpecName "kube-api-access-wjk7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.081560 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.087179 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-config-data" (OuterVolumeSpecName: "config-data") pod "02521e5a-0702-44ab-a1c2-81f6dfe3eb3a" (UID: "02521e5a-0702-44ab-a1c2-81f6dfe3eb3a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.162195 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwlsd\" (UniqueName: \"kubernetes.io/projected/8be4a12e-9d3b-45c6-b5be-04b23795459e-kube-api-access-qwlsd\") pod \"8be4a12e-9d3b-45c6-b5be-04b23795459e\" (UID: \"8be4a12e-9d3b-45c6-b5be-04b23795459e\") " Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.162721 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8be4a12e-9d3b-45c6-b5be-04b23795459e-config-data\") pod \"8be4a12e-9d3b-45c6-b5be-04b23795459e\" (UID: \"8be4a12e-9d3b-45c6-b5be-04b23795459e\") " Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.162871 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8be4a12e-9d3b-45c6-b5be-04b23795459e-logs\") pod \"8be4a12e-9d3b-45c6-b5be-04b23795459e\" (UID: \"8be4a12e-9d3b-45c6-b5be-04b23795459e\") " Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.163294 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.163358 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.163369 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjk7q\" (UniqueName: \"kubernetes.io/projected/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a-kube-api-access-wjk7q\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.163374 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8be4a12e-9d3b-45c6-b5be-04b23795459e-logs" (OuterVolumeSpecName: "logs") pod "8be4a12e-9d3b-45c6-b5be-04b23795459e" (UID: "8be4a12e-9d3b-45c6-b5be-04b23795459e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.165746 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8be4a12e-9d3b-45c6-b5be-04b23795459e-kube-api-access-qwlsd" (OuterVolumeSpecName: "kube-api-access-qwlsd") pod "8be4a12e-9d3b-45c6-b5be-04b23795459e" (UID: "8be4a12e-9d3b-45c6-b5be-04b23795459e"). InnerVolumeSpecName "kube-api-access-qwlsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.187250 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8be4a12e-9d3b-45c6-b5be-04b23795459e-config-data" (OuterVolumeSpecName: "config-data") pod "8be4a12e-9d3b-45c6-b5be-04b23795459e" (UID: "8be4a12e-9d3b-45c6-b5be-04b23795459e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.264610 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwlsd\" (UniqueName: \"kubernetes.io/projected/8be4a12e-9d3b-45c6-b5be-04b23795459e-kube-api-access-qwlsd\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.264645 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8be4a12e-9d3b-45c6-b5be-04b23795459e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.264658 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8be4a12e-9d3b-45c6-b5be-04b23795459e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.286643 4875 generic.go:334] "Generic (PLEG): container finished" podID="02521e5a-0702-44ab-a1c2-81f6dfe3eb3a" containerID="7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a" exitCode=0 Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.286676 4875 generic.go:334] "Generic (PLEG): container finished" podID="02521e5a-0702-44ab-a1c2-81f6dfe3eb3a" containerID="3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39" exitCode=143 Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.286717 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.286724 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a","Type":"ContainerDied","Data":"7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a"} Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.286833 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a","Type":"ContainerDied","Data":"3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39"} Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.286844 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"02521e5a-0702-44ab-a1c2-81f6dfe3eb3a","Type":"ContainerDied","Data":"f4827f26dd1002e42a57c56fe02149d5efaa510d9fe3b437a88ce5194b10c238"} Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.286857 4875 scope.go:117] "RemoveContainer" containerID="7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.290999 4875 generic.go:334] "Generic (PLEG): container finished" podID="8be4a12e-9d3b-45c6-b5be-04b23795459e" containerID="d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67" exitCode=0 Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.291036 4875 generic.go:334] "Generic (PLEG): container finished" podID="8be4a12e-9d3b-45c6-b5be-04b23795459e" containerID="a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63" exitCode=143 Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.291044 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.291128 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8be4a12e-9d3b-45c6-b5be-04b23795459e","Type":"ContainerDied","Data":"d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67"} Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.291163 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8be4a12e-9d3b-45c6-b5be-04b23795459e","Type":"ContainerDied","Data":"a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63"} Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.291177 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8be4a12e-9d3b-45c6-b5be-04b23795459e","Type":"ContainerDied","Data":"02cf2764f80f76ba1103bb503ea70e4b097b83504103c0722ff27ea2f78be2ed"} Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.310778 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.322343 4875 scope.go:117] "RemoveContainer" containerID="3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.329688 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.344059 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:27:42 crc kubenswrapper[4875]: E0130 17:27:42.344535 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02521e5a-0702-44ab-a1c2-81f6dfe3eb3a" containerName="nova-kuttl-api-log" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.344559 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="02521e5a-0702-44ab-a1c2-81f6dfe3eb3a" containerName="nova-kuttl-api-log" Jan 30 17:27:42 crc kubenswrapper[4875]: E0130 17:27:42.344572 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0579fff9-2e84-4cb6-8a96-08144cfecf05" containerName="nova-manage" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.344599 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="0579fff9-2e84-4cb6-8a96-08144cfecf05" containerName="nova-manage" Jan 30 17:27:42 crc kubenswrapper[4875]: E0130 17:27:42.344624 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be4a12e-9d3b-45c6-b5be-04b23795459e" containerName="nova-kuttl-metadata-metadata" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.344633 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be4a12e-9d3b-45c6-b5be-04b23795459e" containerName="nova-kuttl-metadata-metadata" Jan 30 17:27:42 crc kubenswrapper[4875]: E0130 17:27:42.344649 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be4a12e-9d3b-45c6-b5be-04b23795459e" containerName="nova-kuttl-metadata-log" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.344656 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be4a12e-9d3b-45c6-b5be-04b23795459e" containerName="nova-kuttl-metadata-log" Jan 30 17:27:42 crc kubenswrapper[4875]: E0130 17:27:42.344676 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02521e5a-0702-44ab-a1c2-81f6dfe3eb3a" containerName="nova-kuttl-api-api" Jan 30 17:27:42 crc kubenswrapper[4875]: 
I0130 17:27:42.344685 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="02521e5a-0702-44ab-a1c2-81f6dfe3eb3a" containerName="nova-kuttl-api-api" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.344883 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="0579fff9-2e84-4cb6-8a96-08144cfecf05" containerName="nova-manage" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.344899 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="02521e5a-0702-44ab-a1c2-81f6dfe3eb3a" containerName="nova-kuttl-api-log" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.344915 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="02521e5a-0702-44ab-a1c2-81f6dfe3eb3a" containerName="nova-kuttl-api-api" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.344926 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="8be4a12e-9d3b-45c6-b5be-04b23795459e" containerName="nova-kuttl-metadata-log" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.344945 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="8be4a12e-9d3b-45c6-b5be-04b23795459e" containerName="nova-kuttl-metadata-metadata" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.345953 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.349020 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.353613 4875 scope.go:117] "RemoveContainer" containerID="7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.353723 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:27:42 crc kubenswrapper[4875]: E0130 17:27:42.354000 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a\": container with ID starting with 7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a not found: ID does not exist" containerID="7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.354032 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a"} err="failed to get container status \"7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a\": rpc error: code = NotFound desc = could not find container \"7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a\": container with ID starting with 7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a not found: ID does not exist" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.354056 4875 scope.go:117] "RemoveContainer" containerID="3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39" Jan 30 17:27:42 crc kubenswrapper[4875]: E0130 17:27:42.354320 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39\": container with ID starting with 3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39 not found: ID does not exist" 
containerID="3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.354350 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39"} err="failed to get container status \"3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39\": rpc error: code = NotFound desc = could not find container \"3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39\": container with ID starting with 3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39 not found: ID does not exist" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.354371 4875 scope.go:117] "RemoveContainer" containerID="7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.354598 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a"} err="failed to get container status \"7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a\": rpc error: code = NotFound desc = could not find container \"7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a\": container with ID starting with 7b471d645ec73b7b9cf370c5ea803a0ec778e47cb144808731b4aa04d9e02b6a not found: ID does not exist" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.354622 4875 scope.go:117] "RemoveContainer" containerID="3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.354814 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39"} err="failed to get container status \"3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39\": rpc error: code = NotFound desc = could not find container \"3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39\": container with ID starting with 3503b0e5e5b4f257034f9daac5446085073b3868c1435aa36ae30ff0e7aa2d39 not found: ID does not exist" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.354831 4875 scope.go:117] "RemoveContainer" containerID="d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.360805 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.365653 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c7d569a-c682-428e-9d52-ede01d150e74-logs\") pod \"nova-kuttl-api-0\" (UID: \"5c7d569a-c682-428e-9d52-ede01d150e74\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.365766 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr7rm\" (UniqueName: \"kubernetes.io/projected/5c7d569a-c682-428e-9d52-ede01d150e74-kube-api-access-cr7rm\") pod \"nova-kuttl-api-0\" (UID: \"5c7d569a-c682-428e-9d52-ede01d150e74\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.365811 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5c7d569a-c682-428e-9d52-ede01d150e74-config-data\") pod \"nova-kuttl-api-0\" (UID: \"5c7d569a-c682-428e-9d52-ede01d150e74\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.366378 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.383637 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.384937 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.390115 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.404928 4875 scope.go:117] "RemoveContainer" containerID="a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.413047 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.427918 4875 scope.go:117] "RemoveContainer" containerID="d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67" Jan 30 17:27:42 crc kubenswrapper[4875]: E0130 17:27:42.428366 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67\": container with ID starting with d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67 not found: ID does not exist" containerID="d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.428395 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67"} err="failed to get container status \"d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67\": rpc error: code = NotFound desc = could not find container \"d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67\": container with ID starting with d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67 not found: ID does not exist" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.428452 4875 scope.go:117] "RemoveContainer" containerID="a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63" Jan 30 17:27:42 crc kubenswrapper[4875]: E0130 17:27:42.428689 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63\": container with ID starting with a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63 not found: ID does not exist" containerID="a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.428709 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63"} err="failed to get container status \"a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63\": rpc error: code = NotFound desc = could not find container 
\"a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63\": container with ID starting with a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63 not found: ID does not exist" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.428733 4875 scope.go:117] "RemoveContainer" containerID="d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.429018 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67"} err="failed to get container status \"d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67\": rpc error: code = NotFound desc = could not find container \"d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67\": container with ID starting with d7b25743e1f368c27ece26db41a2f1212a7b2046d14486f890e98a83fda6cc67 not found: ID does not exist" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.429036 4875 scope.go:117] "RemoveContainer" containerID="a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.429244 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63"} err="failed to get container status \"a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63\": rpc error: code = NotFound desc = could not find container \"a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63\": container with ID starting with a75be027123f124a45dc3f0f9c26aa9eef51a392cf919d9be5604c62f565ed63 not found: ID does not exist" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.467308 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr7rm\" (UniqueName: \"kubernetes.io/projected/5c7d569a-c682-428e-9d52-ede01d150e74-kube-api-access-cr7rm\") pod \"nova-kuttl-api-0\" (UID: \"5c7d569a-c682-428e-9d52-ede01d150e74\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.467362 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7d569a-c682-428e-9d52-ede01d150e74-config-data\") pod \"nova-kuttl-api-0\" (UID: \"5c7d569a-c682-428e-9d52-ede01d150e74\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.467385 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"4917e806-9c01-4f14-acad-5bf4fa6a6ca9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.467404 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"4917e806-9c01-4f14-acad-5bf4fa6a6ca9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.467423 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c7d569a-c682-428e-9d52-ede01d150e74-logs\") pod \"nova-kuttl-api-0\" (UID: 
\"5c7d569a-c682-428e-9d52-ede01d150e74\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.467481 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwbgn\" (UniqueName: \"kubernetes.io/projected/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-kube-api-access-vwbgn\") pod \"nova-kuttl-metadata-0\" (UID: \"4917e806-9c01-4f14-acad-5bf4fa6a6ca9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.467764 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c7d569a-c682-428e-9d52-ede01d150e74-logs\") pod \"nova-kuttl-api-0\" (UID: \"5c7d569a-c682-428e-9d52-ede01d150e74\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.471645 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7d569a-c682-428e-9d52-ede01d150e74-config-data\") pod \"nova-kuttl-api-0\" (UID: \"5c7d569a-c682-428e-9d52-ede01d150e74\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.482108 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr7rm\" (UniqueName: \"kubernetes.io/projected/5c7d569a-c682-428e-9d52-ede01d150e74-kube-api-access-cr7rm\") pod \"nova-kuttl-api-0\" (UID: \"5c7d569a-c682-428e-9d52-ede01d150e74\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.568983 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwbgn\" (UniqueName: \"kubernetes.io/projected/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-kube-api-access-vwbgn\") pod \"nova-kuttl-metadata-0\" (UID: \"4917e806-9c01-4f14-acad-5bf4fa6a6ca9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.569447 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"4917e806-9c01-4f14-acad-5bf4fa6a6ca9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.569477 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"4917e806-9c01-4f14-acad-5bf4fa6a6ca9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.569934 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"4917e806-9c01-4f14-acad-5bf4fa6a6ca9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.573521 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"4917e806-9c01-4f14-acad-5bf4fa6a6ca9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.592142 4875 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vwbgn\" (UniqueName: \"kubernetes.io/projected/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-kube-api-access-vwbgn\") pod \"nova-kuttl-metadata-0\" (UID: \"4917e806-9c01-4f14-acad-5bf4fa6a6ca9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.693717 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:42 crc kubenswrapper[4875]: I0130 17:27:42.712627 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:43 crc kubenswrapper[4875]: I0130 17:27:43.123727 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:27:43 crc kubenswrapper[4875]: I0130 17:27:43.212890 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:27:43 crc kubenswrapper[4875]: I0130 17:27:43.300866 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4917e806-9c01-4f14-acad-5bf4fa6a6ca9","Type":"ContainerStarted","Data":"b9a7a3a29f0d1feb7b04b4ece4d2e0bad736817f833dd0471089b409148b7d60"} Jan 30 17:27:43 crc kubenswrapper[4875]: I0130 17:27:43.302601 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5c7d569a-c682-428e-9d52-ede01d150e74","Type":"ContainerStarted","Data":"44a95ade3b469d1020cd21f290c27d6a5a56a03fa3ed17cb2c937ab9d32398b2"} Jan 30 17:27:43 crc kubenswrapper[4875]: I0130 17:27:43.302630 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5c7d569a-c682-428e-9d52-ede01d150e74","Type":"ContainerStarted","Data":"611c11f2da637422427067d5cef29726ef0188d6e8a0fca5252088e5f0313304"} Jan 30 17:27:43 crc kubenswrapper[4875]: I0130 17:27:43.340647 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:27:43 crc kubenswrapper[4875]: I0130 17:27:43.358945 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:27:44 crc kubenswrapper[4875]: I0130 17:27:44.148324 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02521e5a-0702-44ab-a1c2-81f6dfe3eb3a" path="/var/lib/kubelet/pods/02521e5a-0702-44ab-a1c2-81f6dfe3eb3a/volumes" Jan 30 17:27:44 crc kubenswrapper[4875]: I0130 17:27:44.149416 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8be4a12e-9d3b-45c6-b5be-04b23795459e" path="/var/lib/kubelet/pods/8be4a12e-9d3b-45c6-b5be-04b23795459e/volumes" Jan 30 17:27:44 crc kubenswrapper[4875]: I0130 17:27:44.318146 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4917e806-9c01-4f14-acad-5bf4fa6a6ca9","Type":"ContainerStarted","Data":"70498c8010e5068ede962bd0d95289ae5e10da9e7e0138b60f8d0dff2421240e"} Jan 30 17:27:44 crc kubenswrapper[4875]: I0130 17:27:44.318213 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4917e806-9c01-4f14-acad-5bf4fa6a6ca9","Type":"ContainerStarted","Data":"0e7417e84b78214df7d28c1b79f358a3fef5437158b495cda2d7c9ec3147a758"} Jan 30 17:27:44 crc kubenswrapper[4875]: I0130 17:27:44.320966 4875 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5c7d569a-c682-428e-9d52-ede01d150e74","Type":"ContainerStarted","Data":"ce82874f7600d1aebf957b002188a017f0ed4786f03a493561c32feb47f9158c"} Jan 30 17:27:44 crc kubenswrapper[4875]: I0130 17:27:44.331390 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:27:44 crc kubenswrapper[4875]: I0130 17:27:44.341694 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.341674182 podStartE2EDuration="2.341674182s" podCreationTimestamp="2026-01-30 17:27:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:27:44.333243823 +0000 UTC m=+1874.880607226" watchObservedRunningTime="2026-01-30 17:27:44.341674182 +0000 UTC m=+1874.889037565" Jan 30 17:27:44 crc kubenswrapper[4875]: I0130 17:27:44.384391 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.384374404 podStartE2EDuration="2.384374404s" podCreationTimestamp="2026-01-30 17:27:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:27:44.367056852 +0000 UTC m=+1874.914420255" watchObservedRunningTime="2026-01-30 17:27:44.384374404 +0000 UTC m=+1874.931737787" Jan 30 17:27:45 crc kubenswrapper[4875]: I0130 17:27:45.741366 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:45 crc kubenswrapper[4875]: I0130 17:27:45.825985 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e86b1a7-9d16-4f4e-99f2-70d7d4819d83-config-data\") pod \"3e86b1a7-9d16-4f4e-99f2-70d7d4819d83\" (UID: \"3e86b1a7-9d16-4f4e-99f2-70d7d4819d83\") " Jan 30 17:27:45 crc kubenswrapper[4875]: I0130 17:27:45.826317 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj45l\" (UniqueName: \"kubernetes.io/projected/3e86b1a7-9d16-4f4e-99f2-70d7d4819d83-kube-api-access-tj45l\") pod \"3e86b1a7-9d16-4f4e-99f2-70d7d4819d83\" (UID: \"3e86b1a7-9d16-4f4e-99f2-70d7d4819d83\") " Jan 30 17:27:45 crc kubenswrapper[4875]: I0130 17:27:45.834205 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e86b1a7-9d16-4f4e-99f2-70d7d4819d83-kube-api-access-tj45l" (OuterVolumeSpecName: "kube-api-access-tj45l") pod "3e86b1a7-9d16-4f4e-99f2-70d7d4819d83" (UID: "3e86b1a7-9d16-4f4e-99f2-70d7d4819d83"). InnerVolumeSpecName "kube-api-access-tj45l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:27:45 crc kubenswrapper[4875]: I0130 17:27:45.857257 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e86b1a7-9d16-4f4e-99f2-70d7d4819d83-config-data" (OuterVolumeSpecName: "config-data") pod "3e86b1a7-9d16-4f4e-99f2-70d7d4819d83" (UID: "3e86b1a7-9d16-4f4e-99f2-70d7d4819d83"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:27:45 crc kubenswrapper[4875]: I0130 17:27:45.928907 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj45l\" (UniqueName: \"kubernetes.io/projected/3e86b1a7-9d16-4f4e-99f2-70d7d4819d83-kube-api-access-tj45l\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:45 crc kubenswrapper[4875]: I0130 17:27:45.928984 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e86b1a7-9d16-4f4e-99f2-70d7d4819d83-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.336086 4875 generic.go:334] "Generic (PLEG): container finished" podID="3e86b1a7-9d16-4f4e-99f2-70d7d4819d83" containerID="f57c50d3733129d489e5e9c939ee00893b5c45783c7f529140619e45d5f35e93" exitCode=0 Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.336129 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3e86b1a7-9d16-4f4e-99f2-70d7d4819d83","Type":"ContainerDied","Data":"f57c50d3733129d489e5e9c939ee00893b5c45783c7f529140619e45d5f35e93"} Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.336159 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3e86b1a7-9d16-4f4e-99f2-70d7d4819d83","Type":"ContainerDied","Data":"e37a8f36ad78db36f30515b7265afd29f638dbb37e2198fa674dbaf543cf6ec3"} Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.336164 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.336175 4875 scope.go:117] "RemoveContainer" containerID="f57c50d3733129d489e5e9c939ee00893b5c45783c7f529140619e45d5f35e93" Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.362626 4875 scope.go:117] "RemoveContainer" containerID="f57c50d3733129d489e5e9c939ee00893b5c45783c7f529140619e45d5f35e93" Jan 30 17:27:46 crc kubenswrapper[4875]: E0130 17:27:46.363927 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f57c50d3733129d489e5e9c939ee00893b5c45783c7f529140619e45d5f35e93\": container with ID starting with f57c50d3733129d489e5e9c939ee00893b5c45783c7f529140619e45d5f35e93 not found: ID does not exist" containerID="f57c50d3733129d489e5e9c939ee00893b5c45783c7f529140619e45d5f35e93" Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.364098 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f57c50d3733129d489e5e9c939ee00893b5c45783c7f529140619e45d5f35e93"} err="failed to get container status \"f57c50d3733129d489e5e9c939ee00893b5c45783c7f529140619e45d5f35e93\": rpc error: code = NotFound desc = could not find container \"f57c50d3733129d489e5e9c939ee00893b5c45783c7f529140619e45d5f35e93\": container with ID starting with f57c50d3733129d489e5e9c939ee00893b5c45783c7f529140619e45d5f35e93 not found: ID does not exist" Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.385844 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.396019 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.403889 4875 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:27:46 crc kubenswrapper[4875]: E0130 17:27:46.404263 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e86b1a7-9d16-4f4e-99f2-70d7d4819d83" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.404280 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e86b1a7-9d16-4f4e-99f2-70d7d4819d83" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.404413 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e86b1a7-9d16-4f4e-99f2-70d7d4819d83" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.405020 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.407325 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.411306 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.436572 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d19716-8d68-4ed1-973d-20e4d508e618-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"20d19716-8d68-4ed1-973d-20e4d508e618\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.436652 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlnz6\" (UniqueName: \"kubernetes.io/projected/20d19716-8d68-4ed1-973d-20e4d508e618-kube-api-access-nlnz6\") pod \"nova-kuttl-scheduler-0\" (UID: \"20d19716-8d68-4ed1-973d-20e4d508e618\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.538771 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d19716-8d68-4ed1-973d-20e4d508e618-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"20d19716-8d68-4ed1-973d-20e4d508e618\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.539045 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlnz6\" (UniqueName: \"kubernetes.io/projected/20d19716-8d68-4ed1-973d-20e4d508e618-kube-api-access-nlnz6\") pod \"nova-kuttl-scheduler-0\" (UID: \"20d19716-8d68-4ed1-973d-20e4d508e618\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.543416 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d19716-8d68-4ed1-973d-20e4d508e618-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"20d19716-8d68-4ed1-973d-20e4d508e618\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.556120 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlnz6\" (UniqueName: \"kubernetes.io/projected/20d19716-8d68-4ed1-973d-20e4d508e618-kube-api-access-nlnz6\") pod \"nova-kuttl-scheduler-0\" (UID: \"20d19716-8d68-4ed1-973d-20e4d508e618\") " 
pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:46 crc kubenswrapper[4875]: I0130 17:27:46.724406 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:47 crc kubenswrapper[4875]: W0130 17:27:47.117433 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20d19716_8d68_4ed1_973d_20e4d508e618.slice/crio-0a21d505ec5c8b3b3a4e021e381677a2f94aa925ffc57ffbf6382f5d28dce481 WatchSource:0}: Error finding container 0a21d505ec5c8b3b3a4e021e381677a2f94aa925ffc57ffbf6382f5d28dce481: Status 404 returned error can't find the container with id 0a21d505ec5c8b3b3a4e021e381677a2f94aa925ffc57ffbf6382f5d28dce481 Jan 30 17:27:47 crc kubenswrapper[4875]: I0130 17:27:47.118567 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:27:47 crc kubenswrapper[4875]: I0130 17:27:47.346217 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"20d19716-8d68-4ed1-973d-20e4d508e618","Type":"ContainerStarted","Data":"0a21d505ec5c8b3b3a4e021e381677a2f94aa925ffc57ffbf6382f5d28dce481"} Jan 30 17:27:47 crc kubenswrapper[4875]: I0130 17:27:47.713202 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:47 crc kubenswrapper[4875]: I0130 17:27:47.713652 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:48 crc kubenswrapper[4875]: I0130 17:27:48.145744 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e86b1a7-9d16-4f4e-99f2-70d7d4819d83" path="/var/lib/kubelet/pods/3e86b1a7-9d16-4f4e-99f2-70d7d4819d83/volumes" Jan 30 17:27:48 crc kubenswrapper[4875]: I0130 17:27:48.356503 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"20d19716-8d68-4ed1-973d-20e4d508e618","Type":"ContainerStarted","Data":"6635c428af298d51e052ecc52b5e4d2f9f1905bcb054bb2e26d07972805c804f"} Jan 30 17:27:48 crc kubenswrapper[4875]: I0130 17:27:48.381057 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.381038523 podStartE2EDuration="2.381038523s" podCreationTimestamp="2026-01-30 17:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:27:48.376449376 +0000 UTC m=+1878.923812759" watchObservedRunningTime="2026-01-30 17:27:48.381038523 +0000 UTC m=+1878.928401896" Jan 30 17:27:49 crc kubenswrapper[4875]: I0130 17:27:49.691343 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:27:50 crc kubenswrapper[4875]: I0130 17:27:50.109754 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2"] Jan 30 17:27:50 crc kubenswrapper[4875]: I0130 17:27:50.111053 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" Jan 30 17:27:50 crc kubenswrapper[4875]: I0130 17:27:50.118924 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 30 17:27:50 crc kubenswrapper[4875]: I0130 17:27:50.120166 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 30 17:27:50 crc kubenswrapper[4875]: I0130 17:27:50.125137 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2"] Jan 30 17:27:50 crc kubenswrapper[4875]: I0130 17:27:50.210155 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-config-data\") pod \"nova-kuttl-cell1-cell-mapping-s96h2\" (UID: \"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" Jan 30 17:27:50 crc kubenswrapper[4875]: I0130 17:27:50.210234 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-scripts\") pod \"nova-kuttl-cell1-cell-mapping-s96h2\" (UID: \"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" Jan 30 17:27:50 crc kubenswrapper[4875]: I0130 17:27:50.210302 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4rgr\" (UniqueName: \"kubernetes.io/projected/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-kube-api-access-v4rgr\") pod \"nova-kuttl-cell1-cell-mapping-s96h2\" (UID: \"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" Jan 30 17:27:50 crc kubenswrapper[4875]: I0130 17:27:50.312046 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-config-data\") pod \"nova-kuttl-cell1-cell-mapping-s96h2\" (UID: \"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" Jan 30 17:27:50 crc kubenswrapper[4875]: I0130 17:27:50.312142 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-scripts\") pod \"nova-kuttl-cell1-cell-mapping-s96h2\" (UID: \"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" Jan 30 17:27:50 crc kubenswrapper[4875]: I0130 17:27:50.312236 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4rgr\" (UniqueName: \"kubernetes.io/projected/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-kube-api-access-v4rgr\") pod \"nova-kuttl-cell1-cell-mapping-s96h2\" (UID: \"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" Jan 30 17:27:50 crc kubenswrapper[4875]: I0130 17:27:50.318057 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-scripts\") pod \"nova-kuttl-cell1-cell-mapping-s96h2\" (UID: \"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" Jan 30 17:27:50 crc 
kubenswrapper[4875]: I0130 17:27:50.332719 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-config-data\") pod \"nova-kuttl-cell1-cell-mapping-s96h2\" (UID: \"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" Jan 30 17:27:50 crc kubenswrapper[4875]: I0130 17:27:50.345419 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4rgr\" (UniqueName: \"kubernetes.io/projected/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-kube-api-access-v4rgr\") pod \"nova-kuttl-cell1-cell-mapping-s96h2\" (UID: \"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" Jan 30 17:27:50 crc kubenswrapper[4875]: I0130 17:27:50.435094 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" Jan 30 17:27:50 crc kubenswrapper[4875]: I0130 17:27:50.910311 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2"] Jan 30 17:27:51 crc kubenswrapper[4875]: I0130 17:27:51.384380 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" event={"ID":"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7","Type":"ContainerStarted","Data":"94f3af0360fd6badd605b830b7231cd9bce2de25e8225e009bfc0631503624fd"} Jan 30 17:27:51 crc kubenswrapper[4875]: I0130 17:27:51.384786 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" event={"ID":"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7","Type":"ContainerStarted","Data":"caff93756e74cab47f7632f22cda9fe2377870dcce20c6cfd55a0dc5b935a495"} Jan 30 17:27:51 crc kubenswrapper[4875]: I0130 17:27:51.408764 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" podStartSLOduration=1.408740466 podStartE2EDuration="1.408740466s" podCreationTimestamp="2026-01-30 17:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:27:51.406562466 +0000 UTC m=+1881.953925879" watchObservedRunningTime="2026-01-30 17:27:51.408740466 +0000 UTC m=+1881.956103879" Jan 30 17:27:51 crc kubenswrapper[4875]: I0130 17:27:51.724671 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:52 crc kubenswrapper[4875]: I0130 17:27:52.694316 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:52 crc kubenswrapper[4875]: I0130 17:27:52.694749 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:27:52 crc kubenswrapper[4875]: I0130 17:27:52.713605 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:52 crc kubenswrapper[4875]: I0130 17:27:52.713663 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:27:53 crc kubenswrapper[4875]: I0130 17:27:53.859783 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="5c7d569a-c682-428e-9d52-ede01d150e74" 
containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.168:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:27:53 crc kubenswrapper[4875]: I0130 17:27:53.859861 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4917e806-9c01-4f14-acad-5bf4fa6a6ca9" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.169:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:27:53 crc kubenswrapper[4875]: I0130 17:27:53.859934 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4917e806-9c01-4f14-acad-5bf4fa6a6ca9" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.169:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:27:53 crc kubenswrapper[4875]: I0130 17:27:53.859949 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="5c7d569a-c682-428e-9d52-ede01d150e74" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.168:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:27:56 crc kubenswrapper[4875]: I0130 17:27:56.704033 4875 generic.go:334] "Generic (PLEG): container finished" podID="9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7" containerID="94f3af0360fd6badd605b830b7231cd9bce2de25e8225e009bfc0631503624fd" exitCode=0 Jan 30 17:27:56 crc kubenswrapper[4875]: I0130 17:27:56.704159 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" event={"ID":"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7","Type":"ContainerDied","Data":"94f3af0360fd6badd605b830b7231cd9bce2de25e8225e009bfc0631503624fd"} Jan 30 17:27:56 crc kubenswrapper[4875]: I0130 17:27:56.725290 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:56 crc kubenswrapper[4875]: I0130 17:27:56.762065 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:57 crc kubenswrapper[4875]: I0130 17:27:57.748887 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:27:58 crc kubenswrapper[4875]: I0130 17:27:58.010569 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" Jan 30 17:27:58 crc kubenswrapper[4875]: I0130 17:27:58.148568 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4rgr\" (UniqueName: \"kubernetes.io/projected/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-kube-api-access-v4rgr\") pod \"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7\" (UID: \"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7\") " Jan 30 17:27:58 crc kubenswrapper[4875]: I0130 17:27:58.148760 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-scripts\") pod \"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7\" (UID: \"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7\") " Jan 30 17:27:58 crc kubenswrapper[4875]: I0130 17:27:58.148870 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-config-data\") pod \"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7\" (UID: \"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7\") " Jan 30 17:27:58 crc kubenswrapper[4875]: I0130 17:27:58.155198 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-scripts" (OuterVolumeSpecName: "scripts") pod "9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7" (UID: "9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:27:58 crc kubenswrapper[4875]: I0130 17:27:58.156255 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-kube-api-access-v4rgr" (OuterVolumeSpecName: "kube-api-access-v4rgr") pod "9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7" (UID: "9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7"). InnerVolumeSpecName "kube-api-access-v4rgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:27:58 crc kubenswrapper[4875]: I0130 17:27:58.171678 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-config-data" (OuterVolumeSpecName: "config-data") pod "9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7" (UID: "9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:27:58 crc kubenswrapper[4875]: I0130 17:27:58.250335 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4rgr\" (UniqueName: \"kubernetes.io/projected/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-kube-api-access-v4rgr\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:58 crc kubenswrapper[4875]: I0130 17:27:58.250376 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:58 crc kubenswrapper[4875]: I0130 17:27:58.250386 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:27:58 crc kubenswrapper[4875]: I0130 17:27:58.720326 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" Jan 30 17:27:58 crc kubenswrapper[4875]: I0130 17:27:58.720310 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2" event={"ID":"9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7","Type":"ContainerDied","Data":"caff93756e74cab47f7632f22cda9fe2377870dcce20c6cfd55a0dc5b935a495"} Jan 30 17:27:58 crc kubenswrapper[4875]: I0130 17:27:58.720537 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caff93756e74cab47f7632f22cda9fe2377870dcce20c6cfd55a0dc5b935a495" Jan 30 17:27:58 crc kubenswrapper[4875]: I0130 17:27:58.910723 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:27:58 crc kubenswrapper[4875]: I0130 17:27:58.910934 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="5c7d569a-c682-428e-9d52-ede01d150e74" containerName="nova-kuttl-api-log" containerID="cri-o://44a95ade3b469d1020cd21f290c27d6a5a56a03fa3ed17cb2c937ab9d32398b2" gracePeriod=30 Jan 30 17:27:58 crc kubenswrapper[4875]: I0130 17:27:58.911043 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="5c7d569a-c682-428e-9d52-ede01d150e74" containerName="nova-kuttl-api-api" containerID="cri-o://ce82874f7600d1aebf957b002188a017f0ed4786f03a493561c32feb47f9158c" gracePeriod=30 Jan 30 17:27:59 crc kubenswrapper[4875]: I0130 17:27:59.031721 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:27:59 crc kubenswrapper[4875]: I0130 17:27:59.046046 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:27:59 crc kubenswrapper[4875]: I0130 17:27:59.046352 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4917e806-9c01-4f14-acad-5bf4fa6a6ca9" containerName="nova-kuttl-metadata-log" containerID="cri-o://0e7417e84b78214df7d28c1b79f358a3fef5437158b495cda2d7c9ec3147a758" gracePeriod=30 Jan 30 17:27:59 crc kubenswrapper[4875]: I0130 17:27:59.046795 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="4917e806-9c01-4f14-acad-5bf4fa6a6ca9" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://70498c8010e5068ede962bd0d95289ae5e10da9e7e0138b60f8d0dff2421240e" gracePeriod=30 Jan 30 17:27:59 crc kubenswrapper[4875]: I0130 17:27:59.729011 4875 generic.go:334] "Generic (PLEG): container finished" podID="4917e806-9c01-4f14-acad-5bf4fa6a6ca9" containerID="0e7417e84b78214df7d28c1b79f358a3fef5437158b495cda2d7c9ec3147a758" exitCode=143 Jan 30 17:27:59 crc kubenswrapper[4875]: I0130 17:27:59.729080 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4917e806-9c01-4f14-acad-5bf4fa6a6ca9","Type":"ContainerDied","Data":"0e7417e84b78214df7d28c1b79f358a3fef5437158b495cda2d7c9ec3147a758"} Jan 30 17:27:59 crc kubenswrapper[4875]: I0130 17:27:59.730971 4875 generic.go:334] "Generic (PLEG): container finished" podID="5c7d569a-c682-428e-9d52-ede01d150e74" containerID="44a95ade3b469d1020cd21f290c27d6a5a56a03fa3ed17cb2c937ab9d32398b2" exitCode=143 Jan 30 17:27:59 crc kubenswrapper[4875]: I0130 17:27:59.731065 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5c7d569a-c682-428e-9d52-ede01d150e74","Type":"ContainerDied","Data":"44a95ade3b469d1020cd21f290c27d6a5a56a03fa3ed17cb2c937ab9d32398b2"} Jan 30 17:27:59 crc kubenswrapper[4875]: I0130 17:27:59.731128 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="20d19716-8d68-4ed1-973d-20e4d508e618" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://6635c428af298d51e052ecc52b5e4d2f9f1905bcb054bb2e26d07972805c804f" gracePeriod=30 Jan 30 17:28:01 crc kubenswrapper[4875]: E0130 17:28:01.727435 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6635c428af298d51e052ecc52b5e4d2f9f1905bcb054bb2e26d07972805c804f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:28:01 crc kubenswrapper[4875]: E0130 17:28:01.729219 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6635c428af298d51e052ecc52b5e4d2f9f1905bcb054bb2e26d07972805c804f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:28:01 crc kubenswrapper[4875]: E0130 17:28:01.731105 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6635c428af298d51e052ecc52b5e4d2f9f1905bcb054bb2e26d07972805c804f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:28:01 crc kubenswrapper[4875]: E0130 17:28:01.731151 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="20d19716-8d68-4ed1-973d-20e4d508e618" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.674897 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.723797 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwbgn\" (UniqueName: \"kubernetes.io/projected/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-kube-api-access-vwbgn\") pod \"4917e806-9c01-4f14-acad-5bf4fa6a6ca9\" (UID: \"4917e806-9c01-4f14-acad-5bf4fa6a6ca9\") " Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.723854 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-logs\") pod \"4917e806-9c01-4f14-acad-5bf4fa6a6ca9\" (UID: \"4917e806-9c01-4f14-acad-5bf4fa6a6ca9\") " Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.723961 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-config-data\") pod \"4917e806-9c01-4f14-acad-5bf4fa6a6ca9\" (UID: \"4917e806-9c01-4f14-acad-5bf4fa6a6ca9\") " Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.725568 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-logs" (OuterVolumeSpecName: "logs") pod "4917e806-9c01-4f14-acad-5bf4fa6a6ca9" (UID: "4917e806-9c01-4f14-acad-5bf4fa6a6ca9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.739727 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-kube-api-access-vwbgn" (OuterVolumeSpecName: "kube-api-access-vwbgn") pod "4917e806-9c01-4f14-acad-5bf4fa6a6ca9" (UID: "4917e806-9c01-4f14-acad-5bf4fa6a6ca9"). InnerVolumeSpecName "kube-api-access-vwbgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.754725 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-config-data" (OuterVolumeSpecName: "config-data") pod "4917e806-9c01-4f14-acad-5bf4fa6a6ca9" (UID: "4917e806-9c01-4f14-acad-5bf4fa6a6ca9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.760658 4875 generic.go:334] "Generic (PLEG): container finished" podID="5c7d569a-c682-428e-9d52-ede01d150e74" containerID="ce82874f7600d1aebf957b002188a017f0ed4786f03a493561c32feb47f9158c" exitCode=0 Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.760711 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5c7d569a-c682-428e-9d52-ede01d150e74","Type":"ContainerDied","Data":"ce82874f7600d1aebf957b002188a017f0ed4786f03a493561c32feb47f9158c"} Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.763176 4875 generic.go:334] "Generic (PLEG): container finished" podID="4917e806-9c01-4f14-acad-5bf4fa6a6ca9" containerID="70498c8010e5068ede962bd0d95289ae5e10da9e7e0138b60f8d0dff2421240e" exitCode=0 Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.763226 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4917e806-9c01-4f14-acad-5bf4fa6a6ca9","Type":"ContainerDied","Data":"70498c8010e5068ede962bd0d95289ae5e10da9e7e0138b60f8d0dff2421240e"} Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.763275 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"4917e806-9c01-4f14-acad-5bf4fa6a6ca9","Type":"ContainerDied","Data":"b9a7a3a29f0d1feb7b04b4ece4d2e0bad736817f833dd0471089b409148b7d60"} Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.763276 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.763299 4875 scope.go:117] "RemoveContainer" containerID="70498c8010e5068ede962bd0d95289ae5e10da9e7e0138b60f8d0dff2421240e" Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.784291 4875 scope.go:117] "RemoveContainer" containerID="0e7417e84b78214df7d28c1b79f358a3fef5437158b495cda2d7c9ec3147a758" Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.823519 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.823566 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.825849 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.825872 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwbgn\" (UniqueName: \"kubernetes.io/projected/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-kube-api-access-vwbgn\") on node \"crc\" DevicePath \"\"" Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.825886 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4917e806-9c01-4f14-acad-5bf4fa6a6ca9-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.834079 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:28:02 crc kubenswrapper[4875]: E0130 17:28:02.836032 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4917e806-9c01-4f14-acad-5bf4fa6a6ca9" containerName="nova-kuttl-metadata-metadata" 
Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.836055 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="4917e806-9c01-4f14-acad-5bf4fa6a6ca9" containerName="nova-kuttl-metadata-metadata"
Jan 30 17:28:02 crc kubenswrapper[4875]: E0130 17:28:02.836075 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7" containerName="nova-manage"
Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.836082 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7" containerName="nova-manage"
Jan 30 17:28:02 crc kubenswrapper[4875]: E0130 17:28:02.836128 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4917e806-9c01-4f14-acad-5bf4fa6a6ca9" containerName="nova-kuttl-metadata-log"
Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.836135 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="4917e806-9c01-4f14-acad-5bf4fa6a6ca9" containerName="nova-kuttl-metadata-log"
Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.836484 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="4917e806-9c01-4f14-acad-5bf4fa6a6ca9" containerName="nova-kuttl-metadata-metadata"
Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.836498 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7" containerName="nova-manage"
Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.836520 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="4917e806-9c01-4f14-acad-5bf4fa6a6ca9" containerName="nova-kuttl-metadata-log"
Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.839026 4875 scope.go:117] "RemoveContainer" containerID="70498c8010e5068ede962bd0d95289ae5e10da9e7e0138b60f8d0dff2421240e"
Jan 30 17:28:02 crc kubenswrapper[4875]: E0130 17:28:02.839670 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70498c8010e5068ede962bd0d95289ae5e10da9e7e0138b60f8d0dff2421240e\": container with ID starting with 70498c8010e5068ede962bd0d95289ae5e10da9e7e0138b60f8d0dff2421240e not found: ID does not exist" containerID="70498c8010e5068ede962bd0d95289ae5e10da9e7e0138b60f8d0dff2421240e"
Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.839707 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70498c8010e5068ede962bd0d95289ae5e10da9e7e0138b60f8d0dff2421240e"} err="failed to get container status \"70498c8010e5068ede962bd0d95289ae5e10da9e7e0138b60f8d0dff2421240e\": rpc error: code = NotFound desc = could not find container \"70498c8010e5068ede962bd0d95289ae5e10da9e7e0138b60f8d0dff2421240e\": container with ID starting with 70498c8010e5068ede962bd0d95289ae5e10da9e7e0138b60f8d0dff2421240e not found: ID does not exist"
Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.839748 4875 scope.go:117] "RemoveContainer" containerID="0e7417e84b78214df7d28c1b79f358a3fef5437158b495cda2d7c9ec3147a758"
Jan 30 17:28:02 crc kubenswrapper[4875]: E0130 17:28:02.839994 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e7417e84b78214df7d28c1b79f358a3fef5437158b495cda2d7c9ec3147a758\": container with ID starting with 0e7417e84b78214df7d28c1b79f358a3fef5437158b495cda2d7c9ec3147a758 not found: ID does not exist" containerID="0e7417e84b78214df7d28c1b79f358a3fef5437158b495cda2d7c9ec3147a758"
Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.840016 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e7417e84b78214df7d28c1b79f358a3fef5437158b495cda2d7c9ec3147a758"} err="failed to get container status \"0e7417e84b78214df7d28c1b79f358a3fef5437158b495cda2d7c9ec3147a758\": rpc error: code = NotFound desc = could not find container \"0e7417e84b78214df7d28c1b79f358a3fef5437158b495cda2d7c9ec3147a758\": container with ID starting with 0e7417e84b78214df7d28c1b79f358a3fef5437158b495cda2d7c9ec3147a758 not found: ID does not exist"
Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.841704 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.843533 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data"
Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.866481 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.919873 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.028475 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr7rm\" (UniqueName: \"kubernetes.io/projected/5c7d569a-c682-428e-9d52-ede01d150e74-kube-api-access-cr7rm\") pod \"5c7d569a-c682-428e-9d52-ede01d150e74\" (UID: \"5c7d569a-c682-428e-9d52-ede01d150e74\") "
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.028523 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c7d569a-c682-428e-9d52-ede01d150e74-logs\") pod \"5c7d569a-c682-428e-9d52-ede01d150e74\" (UID: \"5c7d569a-c682-428e-9d52-ede01d150e74\") "
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.028667 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7d569a-c682-428e-9d52-ede01d150e74-config-data\") pod \"5c7d569a-c682-428e-9d52-ede01d150e74\" (UID: \"5c7d569a-c682-428e-9d52-ede01d150e74\") "
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.028899 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a438b47-7d96-403c-ac75-74677da11940-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"8a438b47-7d96-403c-ac75-74677da11940\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.028919 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shhdw\" (UniqueName: \"kubernetes.io/projected/8a438b47-7d96-403c-ac75-74677da11940-kube-api-access-shhdw\") pod \"nova-kuttl-metadata-0\" (UID: \"8a438b47-7d96-403c-ac75-74677da11940\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.028996 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a438b47-7d96-403c-ac75-74677da11940-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"8a438b47-7d96-403c-ac75-74677da11940\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
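The kubenswrapper entries above follow the usual klog header wrapped in a journald prefix: a severity letter plus MMDD, wall-clock time, PID, source file:line, then a message that often carries structured key="value" fields. A minimal parsing sketch, assuming only the layout visible above (the group names in the code are illustrative, not an official schema):

```python
import re

# Parses journald-wrapped klog lines like:
#   Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.836055 4875 state_mem.go:107] "..." podUID="..." containerName="..."
KLOG = re.compile(
    r'^(?P<month>\w{3}) +(?P<day>\d+) (?P<time>[\d:]+) (?P<host>\S+) '
    r'(?P<unit>\w+)\[(?P<syspid>\d+)\]: '
    r'(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<klogtime>[\d:.]+) +(?P<pid>\d+) '
    r'(?P<src>[\w.]+:\d+)\] (?P<msg>.*)$'
)
KV = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse(line: str):
    m = KLOG.match(line)
    if not m:
        return None
    rec = m.groupdict()
    # Pull the structured key="value" pairs out of the free-form message tail.
    rec["fields"] = {k: v.strip('"') for k, v in KV.findall(rec["msg"])}
    return rec

sample = ('Jan 30 17:28:02 crc kubenswrapper[4875]: I0130 17:28:02.836055 4875 '
          'state_mem.go:107] "Deleted CPUSet assignment" podUID="4917e806" containerName="x"')
print(parse(sample)["fields"])   # {'podUID': '4917e806', 'containerName': 'x'}
```

With records keyed on severity and source file, the E-level cpu_manager.go and log.go entries above are easy to separate from the routine I-level state churn.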
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.029366 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c7d569a-c682-428e-9d52-ede01d150e74-logs" (OuterVolumeSpecName: "logs") pod "5c7d569a-c682-428e-9d52-ede01d150e74" (UID: "5c7d569a-c682-428e-9d52-ede01d150e74"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.034811 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7d569a-c682-428e-9d52-ede01d150e74-kube-api-access-cr7rm" (OuterVolumeSpecName: "kube-api-access-cr7rm") pod "5c7d569a-c682-428e-9d52-ede01d150e74" (UID: "5c7d569a-c682-428e-9d52-ede01d150e74"). InnerVolumeSpecName "kube-api-access-cr7rm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.047194 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7d569a-c682-428e-9d52-ede01d150e74-config-data" (OuterVolumeSpecName: "config-data") pod "5c7d569a-c682-428e-9d52-ede01d150e74" (UID: "5c7d569a-c682-428e-9d52-ede01d150e74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.129954 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a438b47-7d96-403c-ac75-74677da11940-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"8a438b47-7d96-403c-ac75-74677da11940\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.130071 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shhdw\" (UniqueName: \"kubernetes.io/projected/8a438b47-7d96-403c-ac75-74677da11940-kube-api-access-shhdw\") pod \"nova-kuttl-metadata-0\" (UID: \"8a438b47-7d96-403c-ac75-74677da11940\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.130097 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a438b47-7d96-403c-ac75-74677da11940-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"8a438b47-7d96-403c-ac75-74677da11940\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.130208 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr7rm\" (UniqueName: \"kubernetes.io/projected/5c7d569a-c682-428e-9d52-ede01d150e74-kube-api-access-cr7rm\") on node \"crc\" DevicePath \"\""
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.130224 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c7d569a-c682-428e-9d52-ede01d150e74-logs\") on node \"crc\" DevicePath \"\""
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.130238 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7d569a-c682-428e-9d52-ede01d150e74-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.130689 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a438b47-7d96-403c-ac75-74677da11940-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"8a438b47-7d96-403c-ac75-74677da11940\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.133566 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a438b47-7d96-403c-ac75-74677da11940-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"8a438b47-7d96-403c-ac75-74677da11940\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.146519 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shhdw\" (UniqueName: \"kubernetes.io/projected/8a438b47-7d96-403c-ac75-74677da11940-kube-api-access-shhdw\") pod \"nova-kuttl-metadata-0\" (UID: \"8a438b47-7d96-403c-ac75-74677da11940\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.167238 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.611384 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 30 17:28:03 crc kubenswrapper[4875]: W0130 17:28:03.616643 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a438b47_7d96_403c_ac75_74677da11940.slice/crio-34278ba6e30cd10bc3d6def9f4e3c500cf237576335399756aeec1d7988b8b7e WatchSource:0}: Error finding container 34278ba6e30cd10bc3d6def9f4e3c500cf237576335399756aeec1d7988b8b7e: Status 404 returned error can't find the container with id 34278ba6e30cd10bc3d6def9f4e3c500cf237576335399756aeec1d7988b8b7e
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.773226 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a438b47-7d96-403c-ac75-74677da11940","Type":"ContainerStarted","Data":"34278ba6e30cd10bc3d6def9f4e3c500cf237576335399756aeec1d7988b8b7e"}
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.776812 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5c7d569a-c682-428e-9d52-ede01d150e74","Type":"ContainerDied","Data":"611c11f2da637422427067d5cef29726ef0188d6e8a0fca5252088e5f0313304"}
Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.776864 4875 scope.go:117] "RemoveContainer" containerID="ce82874f7600d1aebf957b002188a017f0ed4786f03a493561c32feb47f9158c"
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.797637 4875 scope.go:117] "RemoveContainer" containerID="44a95ade3b469d1020cd21f290c27d6a5a56a03fa3ed17cb2c937ab9d32398b2" Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.809806 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.816265 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.836129 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:28:03 crc kubenswrapper[4875]: E0130 17:28:03.836466 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7d569a-c682-428e-9d52-ede01d150e74" containerName="nova-kuttl-api-api" Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.836478 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7d569a-c682-428e-9d52-ede01d150e74" containerName="nova-kuttl-api-api" Jan 30 17:28:03 crc kubenswrapper[4875]: E0130 17:28:03.836509 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7d569a-c682-428e-9d52-ede01d150e74" containerName="nova-kuttl-api-log" Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.836514 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7d569a-c682-428e-9d52-ede01d150e74" containerName="nova-kuttl-api-log" Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.836732 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c7d569a-c682-428e-9d52-ede01d150e74" containerName="nova-kuttl-api-log" Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.836745 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c7d569a-c682-428e-9d52-ede01d150e74" containerName="nova-kuttl-api-api" Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.837788 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.841911 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.862159 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.943559 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace9a809-1aa0-434a-9dda-d54b391f0e04-logs\") pod \"nova-kuttl-api-0\" (UID: \"ace9a809-1aa0-434a-9dda-d54b391f0e04\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.943718 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace9a809-1aa0-434a-9dda-d54b391f0e04-config-data\") pod \"nova-kuttl-api-0\" (UID: \"ace9a809-1aa0-434a-9dda-d54b391f0e04\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:28:03 crc kubenswrapper[4875]: I0130 17:28:03.943797 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tfbm\" (UniqueName: \"kubernetes.io/projected/ace9a809-1aa0-434a-9dda-d54b391f0e04-kube-api-access-9tfbm\") pod \"nova-kuttl-api-0\" (UID: \"ace9a809-1aa0-434a-9dda-d54b391f0e04\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:28:04 crc kubenswrapper[4875]: I0130 17:28:04.045278 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace9a809-1aa0-434a-9dda-d54b391f0e04-logs\") pod \"nova-kuttl-api-0\" (UID: \"ace9a809-1aa0-434a-9dda-d54b391f0e04\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:28:04 crc kubenswrapper[4875]: I0130 17:28:04.045352 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace9a809-1aa0-434a-9dda-d54b391f0e04-config-data\") pod \"nova-kuttl-api-0\" (UID: \"ace9a809-1aa0-434a-9dda-d54b391f0e04\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:28:04 crc kubenswrapper[4875]: I0130 17:28:04.045420 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tfbm\" (UniqueName: \"kubernetes.io/projected/ace9a809-1aa0-434a-9dda-d54b391f0e04-kube-api-access-9tfbm\") pod \"nova-kuttl-api-0\" (UID: \"ace9a809-1aa0-434a-9dda-d54b391f0e04\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:28:04 crc kubenswrapper[4875]: I0130 17:28:04.046179 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace9a809-1aa0-434a-9dda-d54b391f0e04-logs\") pod \"nova-kuttl-api-0\" (UID: \"ace9a809-1aa0-434a-9dda-d54b391f0e04\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:28:04 crc kubenswrapper[4875]: I0130 17:28:04.051314 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace9a809-1aa0-434a-9dda-d54b391f0e04-config-data\") pod \"nova-kuttl-api-0\" (UID: \"ace9a809-1aa0-434a-9dda-d54b391f0e04\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:28:04 crc kubenswrapper[4875]: I0130 17:28:04.067713 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tfbm\" 
(UniqueName: \"kubernetes.io/projected/ace9a809-1aa0-434a-9dda-d54b391f0e04-kube-api-access-9tfbm\") pod \"nova-kuttl-api-0\" (UID: \"ace9a809-1aa0-434a-9dda-d54b391f0e04\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:28:04 crc kubenswrapper[4875]: I0130 17:28:04.148232 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4917e806-9c01-4f14-acad-5bf4fa6a6ca9" path="/var/lib/kubelet/pods/4917e806-9c01-4f14-acad-5bf4fa6a6ca9/volumes" Jan 30 17:28:04 crc kubenswrapper[4875]: I0130 17:28:04.148847 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c7d569a-c682-428e-9d52-ede01d150e74" path="/var/lib/kubelet/pods/5c7d569a-c682-428e-9d52-ede01d150e74/volumes" Jan 30 17:28:04 crc kubenswrapper[4875]: I0130 17:28:04.166067 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:28:04 crc kubenswrapper[4875]: I0130 17:28:04.575721 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:28:04 crc kubenswrapper[4875]: I0130 17:28:04.791496 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a438b47-7d96-403c-ac75-74677da11940","Type":"ContainerStarted","Data":"289218439f4dc907eddae46998a6f973ba70910c4a4afbe82102c54c26816af6"} Jan 30 17:28:04 crc kubenswrapper[4875]: I0130 17:28:04.791560 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a438b47-7d96-403c-ac75-74677da11940","Type":"ContainerStarted","Data":"450f934f2841cc5acb8da606324e8c6ed944feb63e3df1e11ec43af20635b526"} Jan 30 17:28:04 crc kubenswrapper[4875]: I0130 17:28:04.794767 4875 generic.go:334] "Generic (PLEG): container finished" podID="20d19716-8d68-4ed1-973d-20e4d508e618" containerID="6635c428af298d51e052ecc52b5e4d2f9f1905bcb054bb2e26d07972805c804f" exitCode=0 Jan 30 17:28:04 crc kubenswrapper[4875]: I0130 17:28:04.794837 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"20d19716-8d68-4ed1-973d-20e4d508e618","Type":"ContainerDied","Data":"6635c428af298d51e052ecc52b5e4d2f9f1905bcb054bb2e26d07972805c804f"} Jan 30 17:28:04 crc kubenswrapper[4875]: I0130 17:28:04.799095 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"ace9a809-1aa0-434a-9dda-d54b391f0e04","Type":"ContainerStarted","Data":"0ce73d8f741f081cda553a25f727fe277d7a01ad2d74c80444615dd734dec715"} Jan 30 17:28:04 crc kubenswrapper[4875]: I0130 17:28:04.799145 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"ace9a809-1aa0-434a-9dda-d54b391f0e04","Type":"ContainerStarted","Data":"ac782b00fe097aeeb614fc6bd22dc1b6694d9f05724f81edac28624615e77f02"} Jan 30 17:28:04 crc kubenswrapper[4875]: I0130 17:28:04.815507 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.815488098 podStartE2EDuration="2.815488098s" podCreationTimestamp="2026-01-30 17:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:28:04.809944732 +0000 UTC m=+1895.357308125" watchObservedRunningTime="2026-01-30 17:28:04.815488098 +0000 UTC m=+1895.362851481" Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.098201 4875 util.go:48] "No ready 
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.098201 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.263867 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d19716-8d68-4ed1-973d-20e4d508e618-config-data\") pod \"20d19716-8d68-4ed1-973d-20e4d508e618\" (UID: \"20d19716-8d68-4ed1-973d-20e4d508e618\") "
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.264807 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlnz6\" (UniqueName: \"kubernetes.io/projected/20d19716-8d68-4ed1-973d-20e4d508e618-kube-api-access-nlnz6\") pod \"20d19716-8d68-4ed1-973d-20e4d508e618\" (UID: \"20d19716-8d68-4ed1-973d-20e4d508e618\") "
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.285981 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d19716-8d68-4ed1-973d-20e4d508e618-kube-api-access-nlnz6" (OuterVolumeSpecName: "kube-api-access-nlnz6") pod "20d19716-8d68-4ed1-973d-20e4d508e618" (UID: "20d19716-8d68-4ed1-973d-20e4d508e618"). InnerVolumeSpecName "kube-api-access-nlnz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.288271 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d19716-8d68-4ed1-973d-20e4d508e618-config-data" (OuterVolumeSpecName: "config-data") pod "20d19716-8d68-4ed1-973d-20e4d508e618" (UID: "20d19716-8d68-4ed1-973d-20e4d508e618"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.366769 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d19716-8d68-4ed1-973d-20e4d508e618-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.366819 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlnz6\" (UniqueName: \"kubernetes.io/projected/20d19716-8d68-4ed1-973d-20e4d508e618-kube-api-access-nlnz6\") on node \"crc\" DevicePath \"\""
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.808777 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"ace9a809-1aa0-434a-9dda-d54b391f0e04","Type":"ContainerStarted","Data":"bd81be4df4049c1f946fbd3f606e7f77c6c8030971ebec5034a7edc59a1c0cfc"}
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.810514 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"20d19716-8d68-4ed1-973d-20e4d508e618","Type":"ContainerDied","Data":"0a21d505ec5c8b3b3a4e021e381677a2f94aa925ffc57ffbf6382f5d28dce481"}
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.810566 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.810572 4875 scope.go:117] "RemoveContainer" containerID="6635c428af298d51e052ecc52b5e4d2f9f1905bcb054bb2e26d07972805c804f"
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.838505 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.8384810959999998 podStartE2EDuration="2.838481096s" podCreationTimestamp="2026-01-30 17:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:28:05.833549729 +0000 UTC m=+1896.380913112" watchObservedRunningTime="2026-01-30 17:28:05.838481096 +0000 UTC m=+1896.385844499"
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.851514 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.863274 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.876444 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 30 17:28:05 crc kubenswrapper[4875]: E0130 17:28:05.877098 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d19716-8d68-4ed1-973d-20e4d508e618" containerName="nova-kuttl-scheduler-scheduler"
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.877118 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d19716-8d68-4ed1-973d-20e4d508e618" containerName="nova-kuttl-scheduler-scheduler"
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.877432 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d19716-8d68-4ed1-973d-20e4d508e618" containerName="nova-kuttl-scheduler-scheduler"
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.878313 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.881468 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data"
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.899319 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.978248 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/017099d9-455f-4e89-b38a-1a5400faec32-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"017099d9-455f-4e89-b38a-1a5400faec32\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 30 17:28:05 crc kubenswrapper[4875]: I0130 17:28:05.978379 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpxff\" (UniqueName: \"kubernetes.io/projected/017099d9-455f-4e89-b38a-1a5400faec32-kube-api-access-tpxff\") pod \"nova-kuttl-scheduler-0\" (UID: \"017099d9-455f-4e89-b38a-1a5400faec32\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 30 17:28:06 crc kubenswrapper[4875]: I0130 17:28:06.080350 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/017099d9-455f-4e89-b38a-1a5400faec32-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"017099d9-455f-4e89-b38a-1a5400faec32\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 30 17:28:06 crc kubenswrapper[4875]: I0130 17:28:06.080718 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpxff\" (UniqueName: \"kubernetes.io/projected/017099d9-455f-4e89-b38a-1a5400faec32-kube-api-access-tpxff\") pod \"nova-kuttl-scheduler-0\" (UID: \"017099d9-455f-4e89-b38a-1a5400faec32\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 30 17:28:06 crc kubenswrapper[4875]: I0130 17:28:06.084849 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/017099d9-455f-4e89-b38a-1a5400faec32-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"017099d9-455f-4e89-b38a-1a5400faec32\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 30 17:28:06 crc kubenswrapper[4875]: I0130 17:28:06.097473 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpxff\" (UniqueName: \"kubernetes.io/projected/017099d9-455f-4e89-b38a-1a5400faec32-kube-api-access-tpxff\") pod \"nova-kuttl-scheduler-0\" (UID: \"017099d9-455f-4e89-b38a-1a5400faec32\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 30 17:28:06 crc kubenswrapper[4875]: I0130 17:28:06.152234 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20d19716-8d68-4ed1-973d-20e4d508e618" path="/var/lib/kubelet/pods/20d19716-8d68-4ed1-973d-20e4d508e618/volumes"
Jan 30 17:28:06 crc kubenswrapper[4875]: I0130 17:28:06.202048 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 30 17:28:06 crc kubenswrapper[4875]: I0130 17:28:06.656381 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 30 17:28:06 crc kubenswrapper[4875]: I0130 17:28:06.821152 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"017099d9-455f-4e89-b38a-1a5400faec32","Type":"ContainerStarted","Data":"d5f1ede68a7d5c4b07e7080736a1bf069f7dd291471ebbcf20234f4710b64de3"}
Jan 30 17:28:07 crc kubenswrapper[4875]: I0130 17:28:07.830572 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"017099d9-455f-4e89-b38a-1a5400faec32","Type":"ContainerStarted","Data":"5ea465c5a95c127d86e40c2ae4a590c6cfe38766e828127f7eb6c7ef453b2934"}
Jan 30 17:28:07 crc kubenswrapper[4875]: I0130 17:28:07.853410 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.853361378 podStartE2EDuration="2.853361378s" podCreationTimestamp="2026-01-30 17:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:28:07.850045142 +0000 UTC m=+1898.397408535" watchObservedRunningTime="2026-01-30 17:28:07.853361378 +0000 UTC m=+1898.400724761"
Jan 30 17:28:08 crc kubenswrapper[4875]: I0130 17:28:08.168190 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:08 crc kubenswrapper[4875]: I0130 17:28:08.168274 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:11 crc kubenswrapper[4875]: I0130 17:28:11.203086 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 30 17:28:13 crc kubenswrapper[4875]: I0130 17:28:13.167687 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:13 crc kubenswrapper[4875]: I0130 17:28:13.167999 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:14 crc kubenswrapper[4875]: I0130 17:28:14.166481 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 30 17:28:14 crc kubenswrapper[4875]: I0130 17:28:14.166854 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 30 17:28:14 crc kubenswrapper[4875]: I0130 17:28:14.208781 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a438b47-7d96-403c-ac75-74677da11940" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.172:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 17:28:14 crc kubenswrapper[4875]: I0130 17:28:14.250811 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a438b47-7d96-403c-ac75-74677da11940" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.172:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 17:28:15 crc kubenswrapper[4875]: I0130 17:28:15.248779 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="ace9a809-1aa0-434a-9dda-d54b391f0e04" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.173:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 17:28:15 crc kubenswrapper[4875]: I0130 17:28:15.248910 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="ace9a809-1aa0-434a-9dda-d54b391f0e04" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.173:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 17:28:16 crc kubenswrapper[4875]: I0130 17:28:16.202317 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 30 17:28:16 crc kubenswrapper[4875]: I0130 17:28:16.228388 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 30 17:28:16 crc kubenswrapper[4875]: I0130 17:28:16.922331 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 30 17:28:23 crc kubenswrapper[4875]: I0130 17:28:23.170136 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:23 crc kubenswrapper[4875]: I0130 17:28:23.170777 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:23 crc kubenswrapper[4875]: I0130 17:28:23.172441 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:23 crc kubenswrapper[4875]: I0130 17:28:23.173188 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 30 17:28:24 crc kubenswrapper[4875]: I0130 17:28:24.175805 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 30 17:28:24 crc kubenswrapper[4875]: I0130 17:28:24.176247 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 30 17:28:24 crc kubenswrapper[4875]: I0130 17:28:24.187968 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 30 17:28:24 crc kubenswrapper[4875]: I0130 17:28:24.220502 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 30 17:28:24 crc kubenswrapper[4875]: I0130 17:28:24.952018 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 30 17:28:24 crc kubenswrapper[4875]: I0130 17:28:24.977454 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.510485 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"]
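The "Probe failed" records above show HTTP startup probes against the pod IPs (port 8775 for the metadata service, 8774 for the API) timing out while the services initialize, after which the probes flip to started and the readiness probes report ready. A rough Python analogue of such an HTTP probe; the URL and timeout below are assumptions for illustration, and the kubelet's real implementation is the prober.go cited in the log:

```python
import urllib.request
import urllib.error

def http_probe(url: str, timeout_s: float = 1.0) -> bool:
    """Roughly mirrors an HTTP GET probe with a hard deadline."""
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            # Kubernetes treats any status >= 200 and < 400 as probe success.
            return 200 <= resp.status < 400
    except (urllib.error.URLError, TimeoutError, OSError):
        # A timeout here corresponds to the logged
        # "context deadline exceeded (Client.Timeout exceeded while awaiting headers)".
        return False

print(http_probe("http://10.217.0.172:8775/"))
```

Until the endpoint answers within the deadline, repeated failures keep the container in the unhealthy startup state seen in the SyncLoop (probe) records.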
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.525211 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.528009 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-config-data\") pod \"nova-kuttl-api-2\" (UID: \"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.528061 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqcd6\" (UniqueName: \"kubernetes.io/projected/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-kube-api-access-wqcd6\") pod \"nova-kuttl-api-2\" (UID: \"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.528125 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-logs\") pod \"nova-kuttl-api-2\" (UID: \"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.537721 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.539379 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.582746 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.630494 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-config-data\") pod \"nova-kuttl-api-2\" (UID: \"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.630602 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqcd6\" (UniqueName: \"kubernetes.io/projected/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-kube-api-access-wqcd6\") pod \"nova-kuttl-api-2\" (UID: \"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.630638 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83d1464-a979-48ab-9f94-cf47197505d4-config-data\") pod \"nova-kuttl-api-1\" (UID: \"c83d1464-a979-48ab-9f94-cf47197505d4\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.630662 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vrdd\" (UniqueName: \"kubernetes.io/projected/c83d1464-a979-48ab-9f94-cf47197505d4-kube-api-access-9vrdd\") pod \"nova-kuttl-api-1\" (UID: \"c83d1464-a979-48ab-9f94-cf47197505d4\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.630684 4875 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c83d1464-a979-48ab-9f94-cf47197505d4-logs\") pod \"nova-kuttl-api-1\" (UID: \"c83d1464-a979-48ab-9f94-cf47197505d4\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.630721 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-logs\") pod \"nova-kuttl-api-2\" (UID: \"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.631256 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-logs\") pod \"nova-kuttl-api-2\" (UID: \"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.644366 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-config-data\") pod \"nova-kuttl-api-2\" (UID: \"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.648322 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqcd6\" (UniqueName: \"kubernetes.io/projected/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-kube-api-access-wqcd6\") pod \"nova-kuttl-api-2\" (UID: \"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.732056 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83d1464-a979-48ab-9f94-cf47197505d4-config-data\") pod \"nova-kuttl-api-1\" (UID: \"c83d1464-a979-48ab-9f94-cf47197505d4\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.732139 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vrdd\" (UniqueName: \"kubernetes.io/projected/c83d1464-a979-48ab-9f94-cf47197505d4-kube-api-access-9vrdd\") pod \"nova-kuttl-api-1\" (UID: \"c83d1464-a979-48ab-9f94-cf47197505d4\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.732173 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c83d1464-a979-48ab-9f94-cf47197505d4-logs\") pod \"nova-kuttl-api-1\" (UID: \"c83d1464-a979-48ab-9f94-cf47197505d4\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.733022 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c83d1464-a979-48ab-9f94-cf47197505d4-logs\") pod \"nova-kuttl-api-1\" (UID: \"c83d1464-a979-48ab-9f94-cf47197505d4\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.735426 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83d1464-a979-48ab-9f94-cf47197505d4-config-data\") pod \"nova-kuttl-api-1\" (UID: \"c83d1464-a979-48ab-9f94-cf47197505d4\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:27 crc 
kubenswrapper[4875]: I0130 17:28:27.748390 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vrdd\" (UniqueName: \"kubernetes.io/projected/c83d1464-a979-48ab-9f94-cf47197505d4-kube-api-access-9vrdd\") pod \"nova-kuttl-api-1\" (UID: \"c83d1464-a979-48ab-9f94-cf47197505d4\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.800725 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.802380 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.807801 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.810761 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.824939 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.833892 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0477cef3-a7d1-4497-8601-8245446e39a2-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"0477cef3-a7d1-4497-8601-8245446e39a2\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.834018 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9316fe4-f7f0-419c-95f0-1144284fad09-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"b9316fe4-f7f0-419c-95f0-1144284fad09\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.834111 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htmt2\" (UniqueName: \"kubernetes.io/projected/b9316fe4-f7f0-419c-95f0-1144284fad09-kube-api-access-htmt2\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"b9316fe4-f7f0-419c-95f0-1144284fad09\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.834174 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k95k2\" (UniqueName: \"kubernetes.io/projected/0477cef3-a7d1-4497-8601-8245446e39a2-kube-api-access-k95k2\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"0477cef3-a7d1-4497-8601-8245446e39a2\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.837170 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.846184 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.855943 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.935704 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9316fe4-f7f0-419c-95f0-1144284fad09-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"b9316fe4-f7f0-419c-95f0-1144284fad09\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.936089 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htmt2\" (UniqueName: \"kubernetes.io/projected/b9316fe4-f7f0-419c-95f0-1144284fad09-kube-api-access-htmt2\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"b9316fe4-f7f0-419c-95f0-1144284fad09\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.936132 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k95k2\" (UniqueName: \"kubernetes.io/projected/0477cef3-a7d1-4497-8601-8245446e39a2-kube-api-access-k95k2\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"0477cef3-a7d1-4497-8601-8245446e39a2\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.936171 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0477cef3-a7d1-4497-8601-8245446e39a2-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"0477cef3-a7d1-4497-8601-8245446e39a2\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.941180 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0477cef3-a7d1-4497-8601-8245446e39a2-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"0477cef3-a7d1-4497-8601-8245446e39a2\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.947971 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9316fe4-f7f0-419c-95f0-1144284fad09-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"b9316fe4-f7f0-419c-95f0-1144284fad09\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.954770 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htmt2\" (UniqueName: \"kubernetes.io/projected/b9316fe4-f7f0-419c-95f0-1144284fad09-kube-api-access-htmt2\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"b9316fe4-f7f0-419c-95f0-1144284fad09\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 30 17:28:27 crc kubenswrapper[4875]: I0130 17:28:27.955150 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k95k2\" (UniqueName: \"kubernetes.io/projected/0477cef3-a7d1-4497-8601-8245446e39a2-kube-api-access-k95k2\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"0477cef3-a7d1-4497-8601-8245446e39a2\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 30 17:28:28 crc kubenswrapper[4875]: I0130 17:28:28.202575 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 30 17:28:28 crc kubenswrapper[4875]: I0130 17:28:28.215915 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 30 17:28:28 crc kubenswrapper[4875]: I0130 17:28:28.321804 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 30 17:28:28 crc kubenswrapper[4875]: W0130 17:28:28.335766 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc83d1464_a979_48ab_9f94_cf47197505d4.slice/crio-6636034b571d4c2e7c317828129e46232c2be044210dd6d8f4e109e4fab9f9f4 WatchSource:0}: Error finding container 6636034b571d4c2e7c317828129e46232c2be044210dd6d8f4e109e4fab9f9f4: Status 404 returned error can't find the container with id 6636034b571d4c2e7c317828129e46232c2be044210dd6d8f4e109e4fab9f9f4 Jan 30 17:28:28 crc kubenswrapper[4875]: I0130 17:28:28.382471 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 30 17:28:28 crc kubenswrapper[4875]: W0130 17:28:28.388575 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4623fd43_ec9d_4b2a_b9d2_a92f1bdc7569.slice/crio-4efe9fcbeb1921ae7550eb977ba7e239c5ad10578ff840d11e63571216147812 WatchSource:0}: Error finding container 4efe9fcbeb1921ae7550eb977ba7e239c5ad10578ff840d11e63571216147812: Status 404 returned error can't find the container with id 4efe9fcbeb1921ae7550eb977ba7e239c5ad10578ff840d11e63571216147812 Jan 30 17:28:28 crc kubenswrapper[4875]: I0130 17:28:28.470673 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 30 17:28:28 crc kubenswrapper[4875]: I0130 17:28:28.806086 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 30 17:28:28 crc kubenswrapper[4875]: I0130 17:28:28.985659 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569","Type":"ContainerStarted","Data":"b7261f31d3bf51d99dd4b4117c0d5d665c678f1f13795e61d6d5839803be8b53"} Jan 30 17:28:28 crc kubenswrapper[4875]: I0130 17:28:28.986064 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569","Type":"ContainerStarted","Data":"0e158db0bfbdfc89042eac9d3d7a6bced03e44f8fecb0b4bd6057d9aa628373f"} Jan 30 17:28:28 crc kubenswrapper[4875]: I0130 17:28:28.986077 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569","Type":"ContainerStarted","Data":"4efe9fcbeb1921ae7550eb977ba7e239c5ad10578ff840d11e63571216147812"} Jan 30 17:28:28 crc kubenswrapper[4875]: I0130 17:28:28.996924 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"c83d1464-a979-48ab-9f94-cf47197505d4","Type":"ContainerStarted","Data":"aeb2755c91de6adfa3e4b24597afdef3636220202ef682e50c9c604efa56dd4f"} Jan 30 17:28:28 crc kubenswrapper[4875]: I0130 17:28:28.996968 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"c83d1464-a979-48ab-9f94-cf47197505d4","Type":"ContainerStarted","Data":"91fa6326c0e846cd326d2bbf1393d5fb94d35c13663eede9617b1fac709582d4"} Jan 30 17:28:28 crc kubenswrapper[4875]: I0130 17:28:28.996979 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" 
event={"ID":"c83d1464-a979-48ab-9f94-cf47197505d4","Type":"ContainerStarted","Data":"6636034b571d4c2e7c317828129e46232c2be044210dd6d8f4e109e4fab9f9f4"} Jan 30 17:28:28 crc kubenswrapper[4875]: I0130 17:28:28.999637 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-2" podStartSLOduration=1.999626707 podStartE2EDuration="1.999626707s" podCreationTimestamp="2026-01-30 17:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:28:28.998505601 +0000 UTC m=+1919.545868984" watchObservedRunningTime="2026-01-30 17:28:28.999626707 +0000 UTC m=+1919.546990090" Jan 30 17:28:29 crc kubenswrapper[4875]: I0130 17:28:29.006171 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"b9316fe4-f7f0-419c-95f0-1144284fad09","Type":"ContainerStarted","Data":"dcfed2539f1203b8476abcdd704e4997291806c00463d54d28c49cd8d39adf41"} Jan 30 17:28:29 crc kubenswrapper[4875]: I0130 17:28:29.009269 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"0477cef3-a7d1-4497-8601-8245446e39a2","Type":"ContainerStarted","Data":"3d83a0868d812a988c999cc5225bf4cecaba70eed68df293543b1352d4adbccc"} Jan 30 17:28:29 crc kubenswrapper[4875]: I0130 17:28:29.009305 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"0477cef3-a7d1-4497-8601-8245446e39a2","Type":"ContainerStarted","Data":"a0c03e14f9c3b44b0471c46c7c972354274e30c5b34a64d8dbbc4bd39983e826"} Jan 30 17:28:29 crc kubenswrapper[4875]: I0130 17:28:29.009444 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 30 17:28:29 crc kubenswrapper[4875]: I0130 17:28:29.021063 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-1" podStartSLOduration=2.02104689 podStartE2EDuration="2.02104689s" podCreationTimestamp="2026-01-30 17:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:28:29.015272106 +0000 UTC m=+1919.562635489" watchObservedRunningTime="2026-01-30 17:28:29.02104689 +0000 UTC m=+1919.568410273" Jan 30 17:28:29 crc kubenswrapper[4875]: I0130 17:28:29.035808 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podStartSLOduration=2.035785671 podStartE2EDuration="2.035785671s" podCreationTimestamp="2026-01-30 17:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:28:29.027260549 +0000 UTC m=+1919.574623942" watchObservedRunningTime="2026-01-30 17:28:29.035785671 +0000 UTC m=+1919.583149054" Jan 30 17:28:30 crc kubenswrapper[4875]: I0130 17:28:30.018144 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"b9316fe4-f7f0-419c-95f0-1144284fad09","Type":"ContainerStarted","Data":"bcd10314f3ccef71c79e77546abed5f566274be35a94b903e61d1915107e2bdd"} Jan 30 17:28:30 crc kubenswrapper[4875]: I0130 17:28:30.019173 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 30 17:28:30 crc 
kubenswrapper[4875]: I0130 17:28:30.038786 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" podStartSLOduration=3.03876904 podStartE2EDuration="3.03876904s" podCreationTimestamp="2026-01-30 17:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:28:30.032448628 +0000 UTC m=+1920.579812021" watchObservedRunningTime="2026-01-30 17:28:30.03876904 +0000 UTC m=+1920.586132423" Jan 30 17:28:33 crc kubenswrapper[4875]: I0130 17:28:33.230787 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 30 17:28:37 crc kubenswrapper[4875]: I0130 17:28:37.837403 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:37 crc kubenswrapper[4875]: I0130 17:28:37.838285 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:37 crc kubenswrapper[4875]: I0130 17:28:37.856995 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:37 crc kubenswrapper[4875]: I0130 17:28:37.857048 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:38 crc kubenswrapper[4875]: I0130 17:28:38.243327 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.001729 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.175:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.001759 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="c83d1464-a979-48ab-9f94-cf47197505d4" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.176:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.001818 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.175:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.001806 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="c83d1464-a979-48ab-9f94-cf47197505d4" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.176:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.397128 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.398233 4875 util.go:30] "No sandbox for pod can be found. 
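Across this span each replica moves from startup unhealthy, through startup started, to readiness ready in the SyncLoop (probe) records. A sketch that computes time-to-ready per pod from those records; the fixed year is an assumption, since syslog-style timestamps omit it, and the regex is matched to the format above rather than taken from any kubelet API:

```python
import re
from datetime import datetime

# Matches the journald prefix plus the probe/status/pod fields of a
# 'SyncLoop (probe)' record.
PROBE = re.compile(r'^(?P<ts>\w{3} +\d+ [\d:]+) .*probe="(?P<probe>\w+)" '
                   r'status="(?P<status>[^"]*)" pod="(?P<pod>[^"]+)"')

def time_to_ready(lines, year=2026):
    first_unhealthy, ready_at = {}, {}
    for line in lines:
        m = PROBE.search(line)
        if not m:
            continue
        ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S")
        pod = m["pod"]
        if m["probe"] == "startup" and m["status"] == "unhealthy":
            first_unhealthy.setdefault(pod, ts)   # earliest failure wins
        elif m["probe"] == "readiness" and m["status"] == "ready":
            ready_at.setdefault(pod, ts)          # first ready wins
    return {p: (ready_at[p] - t).total_seconds()
            for p, t in first_unhealthy.items() if p in ready_at}
```

Applied to the records above, this would report roughly ten seconds from first startup failure to readiness for nova-kuttl-metadata-0 and nova-kuttl-api-0, which matches the probe cadence visible in the timestamps.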
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.422968 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"]
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.424256 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.430007 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmvfg\" (UniqueName: \"kubernetes.io/projected/58bd828d-3607-4a68-adb6-05c6e555631a-kube-api-access-qmvfg\") pod \"nova-kuttl-scheduler-1\" (UID: \"58bd828d-3607-4a68-adb6-05c6e555631a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.430099 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58bd828d-3607-4a68-adb6-05c6e555631a-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"58bd828d-3607-4a68-adb6-05c6e555631a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.435009 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"]
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.454794 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"]
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.470008 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"]
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.471372 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.477636 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"]
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.479463 4875 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.487702 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"]
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.496195 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"]
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.531367 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cb2eb9-9adf-4433-835a-7302ff4b13b2-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"86cb2eb9-9adf-4433-835a-7302ff4b13b2\") " pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.531445 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1426e7d-e54e-492d-816c-1e8937cce809-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"e1426e7d-e54e-492d-816c-1e8937cce809\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.531464 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4l5w\" (UniqueName: \"kubernetes.io/projected/86cb2eb9-9adf-4433-835a-7302ff4b13b2-kube-api-access-k4l5w\") pod \"nova-kuttl-metadata-2\" (UID: \"86cb2eb9-9adf-4433-835a-7302ff4b13b2\") " pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.531481 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6222d09-d842-407b-97bd-d872fca5510d-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"f6222d09-d842-407b-97bd-d872fca5510d\") " pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.531547 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58bd828d-3607-4a68-adb6-05c6e555631a-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"58bd828d-3607-4a68-adb6-05c6e555631a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.531578 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbgtl\" (UniqueName: \"kubernetes.io/projected/f6222d09-d842-407b-97bd-d872fca5510d-kube-api-access-rbgtl\") pod \"nova-kuttl-metadata-1\" (UID: \"f6222d09-d842-407b-97bd-d872fca5510d\") " pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.531630 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxzb5\" (UniqueName: \"kubernetes.io/projected/e1426e7d-e54e-492d-816c-1e8937cce809-kube-api-access-dxzb5\") pod \"nova-kuttl-scheduler-2\" (UID: \"e1426e7d-e54e-492d-816c-1e8937cce809\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.531650 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6222d09-d842-407b-97bd-d872fca5510d-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"f6222d09-d842-407b-97bd-d872fca5510d\") "
pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.531673 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86cb2eb9-9adf-4433-835a-7302ff4b13b2-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"86cb2eb9-9adf-4433-835a-7302ff4b13b2\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.531732 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmvfg\" (UniqueName: \"kubernetes.io/projected/58bd828d-3607-4a68-adb6-05c6e555631a-kube-api-access-qmvfg\") pod \"nova-kuttl-scheduler-1\" (UID: \"58bd828d-3607-4a68-adb6-05c6e555631a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.541046 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58bd828d-3607-4a68-adb6-05c6e555631a-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"58bd828d-3607-4a68-adb6-05c6e555631a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.554012 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmvfg\" (UniqueName: \"kubernetes.io/projected/58bd828d-3607-4a68-adb6-05c6e555631a-kube-api-access-qmvfg\") pod \"nova-kuttl-scheduler-1\" (UID: \"58bd828d-3607-4a68-adb6-05c6e555631a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.633458 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxzb5\" (UniqueName: \"kubernetes.io/projected/e1426e7d-e54e-492d-816c-1e8937cce809-kube-api-access-dxzb5\") pod \"nova-kuttl-scheduler-2\" (UID: \"e1426e7d-e54e-492d-816c-1e8937cce809\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.633742 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6222d09-d842-407b-97bd-d872fca5510d-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"f6222d09-d842-407b-97bd-d872fca5510d\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.633895 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86cb2eb9-9adf-4433-835a-7302ff4b13b2-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"86cb2eb9-9adf-4433-835a-7302ff4b13b2\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.634048 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cb2eb9-9adf-4433-835a-7302ff4b13b2-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"86cb2eb9-9adf-4433-835a-7302ff4b13b2\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.634195 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1426e7d-e54e-492d-816c-1e8937cce809-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"e1426e7d-e54e-492d-816c-1e8937cce809\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.634310 4875 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4l5w\" (UniqueName: \"kubernetes.io/projected/86cb2eb9-9adf-4433-835a-7302ff4b13b2-kube-api-access-k4l5w\") pod \"nova-kuttl-metadata-2\" (UID: \"86cb2eb9-9adf-4433-835a-7302ff4b13b2\") " pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.634415 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6222d09-d842-407b-97bd-d872fca5510d-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"f6222d09-d842-407b-97bd-d872fca5510d\") " pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.634575 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbgtl\" (UniqueName: \"kubernetes.io/projected/f6222d09-d842-407b-97bd-d872fca5510d-kube-api-access-rbgtl\") pod \"nova-kuttl-metadata-1\" (UID: \"f6222d09-d842-407b-97bd-d872fca5510d\") " pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.634086 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6222d09-d842-407b-97bd-d872fca5510d-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"f6222d09-d842-407b-97bd-d872fca5510d\") " pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.634321 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86cb2eb9-9adf-4433-835a-7302ff4b13b2-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"86cb2eb9-9adf-4433-835a-7302ff4b13b2\") " pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.637222 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cb2eb9-9adf-4433-835a-7302ff4b13b2-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"86cb2eb9-9adf-4433-835a-7302ff4b13b2\") " pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.637841 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6222d09-d842-407b-97bd-d872fca5510d-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"f6222d09-d842-407b-97bd-d872fca5510d\") " pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.638259 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1426e7d-e54e-492d-816c-1e8937cce809-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"e1426e7d-e54e-492d-816c-1e8937cce809\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.654172 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbgtl\" (UniqueName: \"kubernetes.io/projected/f6222d09-d842-407b-97bd-d872fca5510d-kube-api-access-rbgtl\") pod \"nova-kuttl-metadata-1\" (UID: \"f6222d09-d842-407b-97bd-d872fca5510d\") " pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.655061 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxzb5\" (UniqueName:
\"kubernetes.io/projected/e1426e7d-e54e-492d-816c-1e8937cce809-kube-api-access-dxzb5\") pod \"nova-kuttl-scheduler-2\" (UID: \"e1426e7d-e54e-492d-816c-1e8937cce809\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.656470 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4l5w\" (UniqueName: \"kubernetes.io/projected/86cb2eb9-9adf-4433-835a-7302ff4b13b2-kube-api-access-k4l5w\") pod \"nova-kuttl-metadata-2\" (UID: \"86cb2eb9-9adf-4433-835a-7302ff4b13b2\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.723098 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.747267 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.813268 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 30 17:28:39 crc kubenswrapper[4875]: I0130 17:28:39.827614 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 30 17:28:40 crc kubenswrapper[4875]: W0130 17:28:40.276211 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58bd828d_3607_4a68_adb6_05c6e555631a.slice/crio-a77c99c4e073e0b2ad02dc6b22d56126f5582ba536dfaed45aeec3279122e84f WatchSource:0}: Error finding container a77c99c4e073e0b2ad02dc6b22d56126f5582ba536dfaed45aeec3279122e84f: Status 404 returned error can't find the container with id a77c99c4e073e0b2ad02dc6b22d56126f5582ba536dfaed45aeec3279122e84f Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.276352 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.286823 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 30 17:28:40 crc kubenswrapper[4875]: W0130 17:28:40.288074 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1426e7d_e54e_492d_816c_1e8937cce809.slice/crio-f56e859248ee999de409681c7cec3829c2de7c7e4db0b8edad512b8f0245d38f WatchSource:0}: Error finding container f56e859248ee999de409681c7cec3829c2de7c7e4db0b8edad512b8f0245d38f: Status 404 returned error can't find the container with id f56e859248ee999de409681c7cec3829c2de7c7e4db0b8edad512b8f0245d38f Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.480861 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.487542 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 30 17:28:40 crc kubenswrapper[4875]: W0130 17:28:40.517753 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86cb2eb9_9adf_4433_835a_7302ff4b13b2.slice/crio-bf81d305353e66faa93566fcb72180941bd8d226711c52e88e82b4deb0e3e13f WatchSource:0}: Error finding container bf81d305353e66faa93566fcb72180941bd8d226711c52e88e82b4deb0e3e13f: Status 404 returned error can't find 
the container with id bf81d305353e66faa93566fcb72180941bd8d226711c52e88e82b4deb0e3e13f
Jan 30 17:28:40 crc kubenswrapper[4875]: W0130 17:28:40.522819 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6222d09_d842_407b_97bd_d872fca5510d.slice/crio-f3bef6f5927251e0503aed6bfb09ae82d5a495c563f7172fc084ce0b1df92152 WatchSource:0}: Error finding container f3bef6f5927251e0503aed6bfb09ae82d5a495c563f7172fc084ce0b1df92152: Status 404 returned error can't find the container with id f3bef6f5927251e0503aed6bfb09ae82d5a495c563f7172fc084ce0b1df92152
Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.593071 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"]
Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.594435 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.606181 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"]
Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.607274 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.622103 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"]
Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.647212 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"]
Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.652345 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ba05e22-391a-4edd-b6d5-ca3964dfb482-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"7ba05e22-391a-4edd-b6d5-ca3964dfb482\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.652394 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pbx4\" (UniqueName: \"kubernetes.io/projected/b5008612-2354-43ed-a738-2eef9ae5b76e-kube-api-access-8pbx4\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"b5008612-2354-43ed-a738-2eef9ae5b76e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.652491 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-962ns\" (UniqueName: \"kubernetes.io/projected/7ba05e22-391a-4edd-b6d5-ca3964dfb482-kube-api-access-962ns\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"7ba05e22-391a-4edd-b6d5-ca3964dfb482\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.652516 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5008612-2354-43ed-a738-2eef9ae5b76e-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"b5008612-2354-43ed-a738-2eef9ae5b76e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.753446 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/7ba05e22-391a-4edd-b6d5-ca3964dfb482-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"7ba05e22-391a-4edd-b6d5-ca3964dfb482\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.753525 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pbx4\" (UniqueName: \"kubernetes.io/projected/b5008612-2354-43ed-a738-2eef9ae5b76e-kube-api-access-8pbx4\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"b5008612-2354-43ed-a738-2eef9ae5b76e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.753852 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-962ns\" (UniqueName: \"kubernetes.io/projected/7ba05e22-391a-4edd-b6d5-ca3964dfb482-kube-api-access-962ns\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"7ba05e22-391a-4edd-b6d5-ca3964dfb482\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.753896 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5008612-2354-43ed-a738-2eef9ae5b76e-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"b5008612-2354-43ed-a738-2eef9ae5b76e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.769553 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5008612-2354-43ed-a738-2eef9ae5b76e-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"b5008612-2354-43ed-a738-2eef9ae5b76e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.769560 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ba05e22-391a-4edd-b6d5-ca3964dfb482-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"7ba05e22-391a-4edd-b6d5-ca3964dfb482\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.776169 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-962ns\" (UniqueName: \"kubernetes.io/projected/7ba05e22-391a-4edd-b6d5-ca3964dfb482-kube-api-access-962ns\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"7ba05e22-391a-4edd-b6d5-ca3964dfb482\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.781330 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pbx4\" (UniqueName: \"kubernetes.io/projected/b5008612-2354-43ed-a738-2eef9ae5b76e-kube-api-access-8pbx4\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"b5008612-2354-43ed-a738-2eef9ae5b76e\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.921700 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 30 17:28:40 crc kubenswrapper[4875]: I0130 17:28:40.950337 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 30 17:28:41 crc kubenswrapper[4875]: I0130 17:28:41.123681 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"e1426e7d-e54e-492d-816c-1e8937cce809","Type":"ContainerStarted","Data":"fe16689d16b3b6af17c350eb60b10b60390570bd7bb25d05e0b81b18b7ad3798"}
Jan 30 17:28:41 crc kubenswrapper[4875]: I0130 17:28:41.123927 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"e1426e7d-e54e-492d-816c-1e8937cce809","Type":"ContainerStarted","Data":"f56e859248ee999de409681c7cec3829c2de7c7e4db0b8edad512b8f0245d38f"}
Jan 30 17:28:41 crc kubenswrapper[4875]: I0130 17:28:41.129076 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"86cb2eb9-9adf-4433-835a-7302ff4b13b2","Type":"ContainerStarted","Data":"8eea69cc5640da551137353add9e0c6b0a39c59be7d790612653f527ad0011a1"}
Jan 30 17:28:41 crc kubenswrapper[4875]: I0130 17:28:41.129116 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"86cb2eb9-9adf-4433-835a-7302ff4b13b2","Type":"ContainerStarted","Data":"fc93cb39617ac268b8e7e71afc2c8b51b8cf3818487d03242c0b51e0f04a527b"}
Jan 30 17:28:41 crc kubenswrapper[4875]: I0130 17:28:41.129233 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"86cb2eb9-9adf-4433-835a-7302ff4b13b2","Type":"ContainerStarted","Data":"bf81d305353e66faa93566fcb72180941bd8d226711c52e88e82b4deb0e3e13f"}
Jan 30 17:28:41 crc kubenswrapper[4875]: I0130 17:28:41.131932 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"f6222d09-d842-407b-97bd-d872fca5510d","Type":"ContainerStarted","Data":"008c83a3f5adb1c42e9a4347c401990325ba3b9906a7da2976a264b27ec58e00"}
Jan 30 17:28:41 crc kubenswrapper[4875]: I0130 17:28:41.131975 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"f6222d09-d842-407b-97bd-d872fca5510d","Type":"ContainerStarted","Data":"16de2dc9b2e33e043e0e3802ca864401fa7e35279379ee1ba2227610c9cea1f6"}
Jan 30 17:28:41 crc kubenswrapper[4875]: I0130 17:28:41.132008 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"f6222d09-d842-407b-97bd-d872fca5510d","Type":"ContainerStarted","Data":"f3bef6f5927251e0503aed6bfb09ae82d5a495c563f7172fc084ce0b1df92152"}
Jan 30 17:28:41 crc kubenswrapper[4875]: I0130 17:28:41.140448 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-2" podStartSLOduration=2.140429962 podStartE2EDuration="2.140429962s" podCreationTimestamp="2026-01-30 17:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:28:41.140121972 +0000 UTC m=+1931.687485355" watchObservedRunningTime="2026-01-30 17:28:41.140429962 +0000 UTC m=+1931.687793335"
Jan 30 17:28:41 crc kubenswrapper[4875]: I0130 17:28:41.143886 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"58bd828d-3607-4a68-adb6-05c6e555631a","Type":"ContainerStarted","Data":"8f53284682ea66a25e42ba446f819be5728bf1b9abf7c463e70ccf01d300827d"}
Jan 30 17:28:41 crc kubenswrapper[4875]: I0130
17:28:41.143932 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"58bd828d-3607-4a68-adb6-05c6e555631a","Type":"ContainerStarted","Data":"a77c99c4e073e0b2ad02dc6b22d56126f5582ba536dfaed45aeec3279122e84f"}
Jan 30 17:28:41 crc kubenswrapper[4875]: I0130 17:28:41.188067 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-1" podStartSLOduration=2.188048281 podStartE2EDuration="2.188048281s" podCreationTimestamp="2026-01-30 17:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:28:41.157702473 +0000 UTC m=+1931.705065856" watchObservedRunningTime="2026-01-30 17:28:41.188048281 +0000 UTC m=+1931.735411664"
Jan 30 17:28:41 crc kubenswrapper[4875]: I0130 17:28:41.233154 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-2" podStartSLOduration=2.233129799 podStartE2EDuration="2.233129799s" podCreationTimestamp="2026-01-30 17:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:28:41.179971333 +0000 UTC m=+1931.727334716" watchObservedRunningTime="2026-01-30 17:28:41.233129799 +0000 UTC m=+1931.780493182"
Jan 30 17:28:41 crc kubenswrapper[4875]: I0130 17:28:41.263135 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-1" podStartSLOduration=2.263112376 podStartE2EDuration="2.263112376s" podCreationTimestamp="2026-01-30 17:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:28:41.201212301 +0000 UTC m=+1931.748575684" watchObservedRunningTime="2026-01-30 17:28:41.263112376 +0000 UTC m=+1931.810475769"
Jan 30 17:28:41 crc kubenswrapper[4875]: I0130 17:28:41.473964 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"]
Jan 30 17:28:41 crc kubenswrapper[4875]: I0130 17:28:41.620673 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"]
Jan 30 17:28:41 crc kubenswrapper[4875]: W0130 17:28:41.630264 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ba05e22_391a_4edd_b6d5_ca3964dfb482.slice/crio-07524a623bbf194edb9de133a1a370ec75b6a705d8b49248385e3ac62cffd5e8 WatchSource:0}: Error finding container 07524a623bbf194edb9de133a1a370ec75b6a705d8b49248385e3ac62cffd5e8: Status 404 returned error can't find the container with id 07524a623bbf194edb9de133a1a370ec75b6a705d8b49248385e3ac62cffd5e8
Jan 30 17:28:42 crc kubenswrapper[4875]: I0130 17:28:42.151167 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"b5008612-2354-43ed-a738-2eef9ae5b76e","Type":"ContainerStarted","Data":"6910fc8307c04489fa88c20354d73cbf945d26cfc30b944a4176cde01620fa23"}
Jan 30 17:28:42 crc kubenswrapper[4875]: I0130 17:28:42.151487 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"b5008612-2354-43ed-a738-2eef9ae5b76e","Type":"ContainerStarted","Data":"21789e436b82b570247166f6ad0aad4ac6cbd1071bfc1d9bc1599b1ea87faae7"}
Jan 30 17:28:42 crc
kubenswrapper[4875]: I0130 17:28:42.151565 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 30 17:28:42 crc kubenswrapper[4875]: I0130 17:28:42.152984 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"7ba05e22-391a-4edd-b6d5-ca3964dfb482","Type":"ContainerStarted","Data":"d5ae90652a4e4dad809da2424a6015bec8f7c0c581d6ccb9a7625d3758b466fe"}
Jan 30 17:28:42 crc kubenswrapper[4875]: I0130 17:28:42.153013 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"7ba05e22-391a-4edd-b6d5-ca3964dfb482","Type":"ContainerStarted","Data":"07524a623bbf194edb9de133a1a370ec75b6a705d8b49248385e3ac62cffd5e8"}
Jan 30 17:28:42 crc kubenswrapper[4875]: I0130 17:28:42.153369 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
Jan 30 17:28:42 crc kubenswrapper[4875]: I0130 17:28:42.167540 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" podStartSLOduration=2.16752525 podStartE2EDuration="2.16752525s" podCreationTimestamp="2026-01-30 17:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:28:42.167185299 +0000 UTC m=+1932.714548692" watchObservedRunningTime="2026-01-30 17:28:42.16752525 +0000 UTC m=+1932.714888633"
Jan 30 17:28:42 crc kubenswrapper[4875]: I0130 17:28:42.182374 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" podStartSLOduration=2.182353853 podStartE2EDuration="2.182353853s" podCreationTimestamp="2026-01-30 17:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:28:42.181893578 +0000 UTC m=+1932.729256971" watchObservedRunningTime="2026-01-30 17:28:42.182353853 +0000 UTC m=+1932.729717236"
Jan 30 17:28:44 crc kubenswrapper[4875]: I0130 17:28:44.723470 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 30 17:28:44 crc kubenswrapper[4875]: I0130 17:28:44.747636 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 30 17:28:44 crc kubenswrapper[4875]: I0130 17:28:44.813838 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 30 17:28:44 crc kubenswrapper[4875]: I0130 17:28:44.813902 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 30 17:28:44 crc kubenswrapper[4875]: I0130 17:28:44.829013 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 30 17:28:44 crc kubenswrapper[4875]: I0130 17:28:44.829071 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 30 17:28:47 crc kubenswrapper[4875]: I0130 17:28:47.843167 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-2"
Jan 30 17:28:47 crc kubenswrapper[4875]: I0130 17:28:47.843570 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started"
pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:47 crc kubenswrapper[4875]: I0130 17:28:47.844142 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:47 crc kubenswrapper[4875]: I0130 17:28:47.844231 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:47 crc kubenswrapper[4875]: I0130 17:28:47.850428 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:47 crc kubenswrapper[4875]: I0130 17:28:47.851949 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:28:47 crc kubenswrapper[4875]: I0130 17:28:47.860881 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:47 crc kubenswrapper[4875]: I0130 17:28:47.861345 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:47 crc kubenswrapper[4875]: I0130 17:28:47.864303 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:47 crc kubenswrapper[4875]: I0130 17:28:47.865000 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:48 crc kubenswrapper[4875]: I0130 17:28:48.204519 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:48 crc kubenswrapper[4875]: I0130 17:28:48.209887 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:28:49 crc kubenswrapper[4875]: I0130 17:28:49.724121 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 30 17:28:49 crc kubenswrapper[4875]: I0130 17:28:49.748316 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 30 17:28:49 crc kubenswrapper[4875]: I0130 17:28:49.749943 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 30 17:28:49 crc kubenswrapper[4875]: I0130 17:28:49.781306 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 30 17:28:49 crc kubenswrapper[4875]: I0130 17:28:49.814002 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 30 17:28:49 crc kubenswrapper[4875]: I0130 17:28:49.814047 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 30 17:28:49 crc kubenswrapper[4875]: I0130 17:28:49.829566 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 30 17:28:49 crc kubenswrapper[4875]: I0130 17:28:49.829620 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 30 17:28:50 crc kubenswrapper[4875]: I0130 17:28:50.247061 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 30 17:28:50 crc kubenswrapper[4875]: I0130 17:28:50.257170 4875 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 30 17:28:50 crc kubenswrapper[4875]: I0130 17:28:50.979441 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 30 17:28:50 crc kubenswrapper[4875]: I0130 17:28:50.984781 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="86cb2eb9-9adf-4433-835a-7302ff4b13b2" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.181:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:28:50 crc kubenswrapper[4875]: I0130 17:28:50.985049 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="f6222d09-d842-407b-97bd-d872fca5510d" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.182:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:28:50 crc kubenswrapper[4875]: I0130 17:28:50.985172 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="86cb2eb9-9adf-4433-835a-7302ff4b13b2" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.181:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:28:50 crc kubenswrapper[4875]: I0130 17:28:50.985502 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="f6222d09-d842-407b-97bd-d872fca5510d" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.182:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:28:50 crc kubenswrapper[4875]: I0130 17:28:50.985814 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 30 17:28:56 crc kubenswrapper[4875]: I0130 17:28:56.286905 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:28:56 crc kubenswrapper[4875]: I0130 17:28:56.287437 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:28:59 crc kubenswrapper[4875]: I0130 17:28:59.816325 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 30 17:28:59 crc kubenswrapper[4875]: I0130 17:28:59.817021 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 30 17:28:59 crc kubenswrapper[4875]: I0130 17:28:59.818698 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 30 17:28:59 crc kubenswrapper[4875]: I0130 17:28:59.819153 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 30 17:28:59 crc kubenswrapper[4875]: I0130 17:28:59.831337 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 30 17:28:59 crc kubenswrapper[4875]: I0130 17:28:59.831813 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 30 17:28:59 crc kubenswrapper[4875]: I0130 17:28:59.837915 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 30 17:29:00 crc kubenswrapper[4875]: I0130 17:29:00.294759 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 30 17:29:01 crc kubenswrapper[4875]: I0130 17:29:01.219423 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 30 17:29:01 crc kubenswrapper[4875]: I0130 17:29:01.219834 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569" containerName="nova-kuttl-api-log" containerID="cri-o://0e158db0bfbdfc89042eac9d3d7a6bced03e44f8fecb0b4bd6057d9aa628373f" gracePeriod=30 Jan 30 17:29:01 crc kubenswrapper[4875]: I0130 17:29:01.220387 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569" containerName="nova-kuttl-api-api" containerID="cri-o://b7261f31d3bf51d99dd4b4117c0d5d665c678f1f13795e61d6d5839803be8b53" gracePeriod=30 Jan 30 17:29:01 crc kubenswrapper[4875]: I0130 17:29:01.232301 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 30 17:29:01 crc kubenswrapper[4875]: I0130 17:29:01.232520 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="c83d1464-a979-48ab-9f94-cf47197505d4" containerName="nova-kuttl-api-log" containerID="cri-o://91fa6326c0e846cd326d2bbf1393d5fb94d35c13663eede9617b1fac709582d4" gracePeriod=30 Jan 30 17:29:01 crc kubenswrapper[4875]: I0130 17:29:01.232549 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="c83d1464-a979-48ab-9f94-cf47197505d4" containerName="nova-kuttl-api-api" containerID="cri-o://aeb2755c91de6adfa3e4b24597afdef3636220202ef682e50c9c604efa56dd4f" gracePeriod=30 Jan 30 17:29:01 crc kubenswrapper[4875]: I0130 17:29:01.460002 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 30 17:29:01 crc kubenswrapper[4875]: I0130 17:29:01.460660 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podUID="0477cef3-a7d1-4497-8601-8245446e39a2" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://3d83a0868d812a988c999cc5225bf4cecaba70eed68df293543b1352d4adbccc" gracePeriod=30 Jan 30 17:29:01 crc kubenswrapper[4875]: I0130 17:29:01.474579 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 30 17:29:01 crc kubenswrapper[4875]: I0130 17:29:01.474856 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" podUID="b9316fe4-f7f0-419c-95f0-1144284fad09" 
containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://bcd10314f3ccef71c79e77546abed5f566274be35a94b903e61d1915107e2bdd" gracePeriod=30 Jan 30 17:29:02 crc kubenswrapper[4875]: I0130 17:29:02.307359 4875 generic.go:334] "Generic (PLEG): container finished" podID="4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569" containerID="0e158db0bfbdfc89042eac9d3d7a6bced03e44f8fecb0b4bd6057d9aa628373f" exitCode=143 Jan 30 17:29:02 crc kubenswrapper[4875]: I0130 17:29:02.307442 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569","Type":"ContainerDied","Data":"0e158db0bfbdfc89042eac9d3d7a6bced03e44f8fecb0b4bd6057d9aa628373f"} Jan 30 17:29:02 crc kubenswrapper[4875]: I0130 17:29:02.309780 4875 generic.go:334] "Generic (PLEG): container finished" podID="c83d1464-a979-48ab-9f94-cf47197505d4" containerID="91fa6326c0e846cd326d2bbf1393d5fb94d35c13663eede9617b1fac709582d4" exitCode=143 Jan 30 17:29:02 crc kubenswrapper[4875]: I0130 17:29:02.309807 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"c83d1464-a979-48ab-9f94-cf47197505d4","Type":"ContainerDied","Data":"91fa6326c0e846cd326d2bbf1393d5fb94d35c13663eede9617b1fac709582d4"} Jan 30 17:29:03 crc kubenswrapper[4875]: E0130 17:29:03.205528 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d83a0868d812a988c999cc5225bf4cecaba70eed68df293543b1352d4adbccc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:29:03 crc kubenswrapper[4875]: E0130 17:29:03.209217 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d83a0868d812a988c999cc5225bf4cecaba70eed68df293543b1352d4adbccc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:29:03 crc kubenswrapper[4875]: E0130 17:29:03.210762 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d83a0868d812a988c999cc5225bf4cecaba70eed68df293543b1352d4adbccc" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:29:03 crc kubenswrapper[4875]: E0130 17:29:03.210806 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podUID="0477cef3-a7d1-4497-8601-8245446e39a2" containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:29:03 crc kubenswrapper[4875]: E0130 17:29:03.222020 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bcd10314f3ccef71c79e77546abed5f566274be35a94b903e61d1915107e2bdd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:29:03 crc kubenswrapper[4875]: E0130 17:29:03.223982 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="bcd10314f3ccef71c79e77546abed5f566274be35a94b903e61d1915107e2bdd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:29:03 crc kubenswrapper[4875]: E0130 17:29:03.225293 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bcd10314f3ccef71c79e77546abed5f566274be35a94b903e61d1915107e2bdd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:29:03 crc kubenswrapper[4875]: E0130 17:29:03.225360 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" podUID="b9316fe4-f7f0-419c-95f0-1144284fad09" containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:29:04 crc kubenswrapper[4875]: I0130 17:29:04.904195 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:29:04 crc kubenswrapper[4875]: I0130 17:29:04.911391 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:29:04 crc kubenswrapper[4875]: I0130 17:29:04.968241 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vrdd\" (UniqueName: \"kubernetes.io/projected/c83d1464-a979-48ab-9f94-cf47197505d4-kube-api-access-9vrdd\") pod \"c83d1464-a979-48ab-9f94-cf47197505d4\" (UID: \"c83d1464-a979-48ab-9f94-cf47197505d4\") " Jan 30 17:29:04 crc kubenswrapper[4875]: I0130 17:29:04.968516 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-logs\") pod \"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569\" (UID: \"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569\") " Jan 30 17:29:04 crc kubenswrapper[4875]: I0130 17:29:04.968613 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-config-data\") pod \"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569\" (UID: \"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569\") " Jan 30 17:29:04 crc kubenswrapper[4875]: I0130 17:29:04.968690 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqcd6\" (UniqueName: \"kubernetes.io/projected/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-kube-api-access-wqcd6\") pod \"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569\" (UID: \"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569\") " Jan 30 17:29:04 crc kubenswrapper[4875]: I0130 17:29:04.968771 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c83d1464-a979-48ab-9f94-cf47197505d4-logs\") pod \"c83d1464-a979-48ab-9f94-cf47197505d4\" (UID: \"c83d1464-a979-48ab-9f94-cf47197505d4\") " Jan 30 17:29:04 crc kubenswrapper[4875]: I0130 17:29:04.968877 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83d1464-a979-48ab-9f94-cf47197505d4-config-data\") pod \"c83d1464-a979-48ab-9f94-cf47197505d4\" (UID: \"c83d1464-a979-48ab-9f94-cf47197505d4\") " Jan 30 17:29:04 crc kubenswrapper[4875]: I0130 17:29:04.969117 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-logs" (OuterVolumeSpecName: "logs") pod "4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569" (UID: "4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:29:04 crc kubenswrapper[4875]: I0130 17:29:04.969389 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:04 crc kubenswrapper[4875]: I0130 17:29:04.970135 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c83d1464-a979-48ab-9f94-cf47197505d4-logs" (OuterVolumeSpecName: "logs") pod "c83d1464-a979-48ab-9f94-cf47197505d4" (UID: "c83d1464-a979-48ab-9f94-cf47197505d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:29:04 crc kubenswrapper[4875]: I0130 17:29:04.991854 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-kube-api-access-wqcd6" (OuterVolumeSpecName: "kube-api-access-wqcd6") pod "4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569" (UID: "4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569"). InnerVolumeSpecName "kube-api-access-wqcd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:04 crc kubenswrapper[4875]: I0130 17:29:04.994477 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c83d1464-a979-48ab-9f94-cf47197505d4-kube-api-access-9vrdd" (OuterVolumeSpecName: "kube-api-access-9vrdd") pod "c83d1464-a979-48ab-9f94-cf47197505d4" (UID: "c83d1464-a979-48ab-9f94-cf47197505d4"). InnerVolumeSpecName "kube-api-access-9vrdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.000803 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83d1464-a979-48ab-9f94-cf47197505d4-config-data" (OuterVolumeSpecName: "config-data") pod "c83d1464-a979-48ab-9f94-cf47197505d4" (UID: "c83d1464-a979-48ab-9f94-cf47197505d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.003873 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-config-data" (OuterVolumeSpecName: "config-data") pod "4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569" (UID: "4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.071127 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.071171 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqcd6\" (UniqueName: \"kubernetes.io/projected/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569-kube-api-access-wqcd6\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.071184 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c83d1464-a979-48ab-9f94-cf47197505d4-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.071193 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83d1464-a979-48ab-9f94-cf47197505d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.071202 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vrdd\" (UniqueName: \"kubernetes.io/projected/c83d1464-a979-48ab-9f94-cf47197505d4-kube-api-access-9vrdd\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.334346 4875 generic.go:334] "Generic (PLEG): container finished" podID="c83d1464-a979-48ab-9f94-cf47197505d4" containerID="aeb2755c91de6adfa3e4b24597afdef3636220202ef682e50c9c604efa56dd4f" exitCode=0 Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.334421 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"c83d1464-a979-48ab-9f94-cf47197505d4","Type":"ContainerDied","Data":"aeb2755c91de6adfa3e4b24597afdef3636220202ef682e50c9c604efa56dd4f"} Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.334454 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"c83d1464-a979-48ab-9f94-cf47197505d4","Type":"ContainerDied","Data":"6636034b571d4c2e7c317828129e46232c2be044210dd6d8f4e109e4fab9f9f4"} Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.334475 4875 scope.go:117] "RemoveContainer" containerID="aeb2755c91de6adfa3e4b24597afdef3636220202ef682e50c9c604efa56dd4f" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.334629 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.346658 4875 generic.go:334] "Generic (PLEG): container finished" podID="4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569" containerID="b7261f31d3bf51d99dd4b4117c0d5d665c678f1f13795e61d6d5839803be8b53" exitCode=0 Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.346710 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569","Type":"ContainerDied","Data":"b7261f31d3bf51d99dd4b4117c0d5d665c678f1f13795e61d6d5839803be8b53"} Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.346724 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.346744 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569","Type":"ContainerDied","Data":"4efe9fcbeb1921ae7550eb977ba7e239c5ad10578ff840d11e63571216147812"} Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.374606 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.384434 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.395325 4875 scope.go:117] "RemoveContainer" containerID="91fa6326c0e846cd326d2bbf1393d5fb94d35c13663eede9617b1fac709582d4" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.407091 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.411764 4875 scope.go:117] "RemoveContainer" containerID="aeb2755c91de6adfa3e4b24597afdef3636220202ef682e50c9c604efa56dd4f" Jan 30 17:29:05 crc kubenswrapper[4875]: E0130 17:29:05.412452 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeb2755c91de6adfa3e4b24597afdef3636220202ef682e50c9c604efa56dd4f\": container with ID starting with aeb2755c91de6adfa3e4b24597afdef3636220202ef682e50c9c604efa56dd4f not found: ID does not exist" containerID="aeb2755c91de6adfa3e4b24597afdef3636220202ef682e50c9c604efa56dd4f" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.412487 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeb2755c91de6adfa3e4b24597afdef3636220202ef682e50c9c604efa56dd4f"} err="failed to get container status \"aeb2755c91de6adfa3e4b24597afdef3636220202ef682e50c9c604efa56dd4f\": rpc error: code = NotFound desc = could not find container \"aeb2755c91de6adfa3e4b24597afdef3636220202ef682e50c9c604efa56dd4f\": container with ID starting with aeb2755c91de6adfa3e4b24597afdef3636220202ef682e50c9c604efa56dd4f not found: ID does not exist" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.412507 4875 scope.go:117] "RemoveContainer" containerID="91fa6326c0e846cd326d2bbf1393d5fb94d35c13663eede9617b1fac709582d4" Jan 30 17:29:05 crc kubenswrapper[4875]: E0130 17:29:05.412986 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91fa6326c0e846cd326d2bbf1393d5fb94d35c13663eede9617b1fac709582d4\": container with ID starting with 91fa6326c0e846cd326d2bbf1393d5fb94d35c13663eede9617b1fac709582d4 not found: ID does not exist" containerID="91fa6326c0e846cd326d2bbf1393d5fb94d35c13663eede9617b1fac709582d4" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.413142 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91fa6326c0e846cd326d2bbf1393d5fb94d35c13663eede9617b1fac709582d4"} err="failed to get container status \"91fa6326c0e846cd326d2bbf1393d5fb94d35c13663eede9617b1fac709582d4\": rpc error: code = NotFound desc = could not find container \"91fa6326c0e846cd326d2bbf1393d5fb94d35c13663eede9617b1fac709582d4\": container with ID starting with 91fa6326c0e846cd326d2bbf1393d5fb94d35c13663eede9617b1fac709582d4 not found: ID does not exist" Jan 30 17:29:05 crc kubenswrapper[4875]: 
I0130 17:29:05.413180 4875 scope.go:117] "RemoveContainer" containerID="b7261f31d3bf51d99dd4b4117c0d5d665c678f1f13795e61d6d5839803be8b53" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.415549 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.430735 4875 scope.go:117] "RemoveContainer" containerID="0e158db0bfbdfc89042eac9d3d7a6bced03e44f8fecb0b4bd6057d9aa628373f" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.445613 4875 scope.go:117] "RemoveContainer" containerID="b7261f31d3bf51d99dd4b4117c0d5d665c678f1f13795e61d6d5839803be8b53" Jan 30 17:29:05 crc kubenswrapper[4875]: E0130 17:29:05.446041 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7261f31d3bf51d99dd4b4117c0d5d665c678f1f13795e61d6d5839803be8b53\": container with ID starting with b7261f31d3bf51d99dd4b4117c0d5d665c678f1f13795e61d6d5839803be8b53 not found: ID does not exist" containerID="b7261f31d3bf51d99dd4b4117c0d5d665c678f1f13795e61d6d5839803be8b53" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.446090 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7261f31d3bf51d99dd4b4117c0d5d665c678f1f13795e61d6d5839803be8b53"} err="failed to get container status \"b7261f31d3bf51d99dd4b4117c0d5d665c678f1f13795e61d6d5839803be8b53\": rpc error: code = NotFound desc = could not find container \"b7261f31d3bf51d99dd4b4117c0d5d665c678f1f13795e61d6d5839803be8b53\": container with ID starting with b7261f31d3bf51d99dd4b4117c0d5d665c678f1f13795e61d6d5839803be8b53 not found: ID does not exist" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.446111 4875 scope.go:117] "RemoveContainer" containerID="0e158db0bfbdfc89042eac9d3d7a6bced03e44f8fecb0b4bd6057d9aa628373f" Jan 30 17:29:05 crc kubenswrapper[4875]: E0130 17:29:05.446395 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e158db0bfbdfc89042eac9d3d7a6bced03e44f8fecb0b4bd6057d9aa628373f\": container with ID starting with 0e158db0bfbdfc89042eac9d3d7a6bced03e44f8fecb0b4bd6057d9aa628373f not found: ID does not exist" containerID="0e158db0bfbdfc89042eac9d3d7a6bced03e44f8fecb0b4bd6057d9aa628373f" Jan 30 17:29:05 crc kubenswrapper[4875]: I0130 17:29:05.446429 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e158db0bfbdfc89042eac9d3d7a6bced03e44f8fecb0b4bd6057d9aa628373f"} err="failed to get container status \"0e158db0bfbdfc89042eac9d3d7a6bced03e44f8fecb0b4bd6057d9aa628373f\": rpc error: code = NotFound desc = could not find container \"0e158db0bfbdfc89042eac9d3d7a6bced03e44f8fecb0b4bd6057d9aa628373f\": container with ID starting with 0e158db0bfbdfc89042eac9d3d7a6bced03e44f8fecb0b4bd6057d9aa628373f not found: ID does not exist" Jan 30 17:29:06 crc kubenswrapper[4875]: I0130 17:29:06.146038 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569" path="/var/lib/kubelet/pods/4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569/volumes" Jan 30 17:29:06 crc kubenswrapper[4875]: I0130 17:29:06.146640 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83d1464-a979-48ab-9f94-cf47197505d4" path="/var/lib/kubelet/pods/c83d1464-a979-48ab-9f94-cf47197505d4/volumes" Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.205448 4875 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.310376 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0477cef3-a7d1-4497-8601-8245446e39a2-config-data\") pod \"0477cef3-a7d1-4497-8601-8245446e39a2\" (UID: \"0477cef3-a7d1-4497-8601-8245446e39a2\") " Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.310649 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k95k2\" (UniqueName: \"kubernetes.io/projected/0477cef3-a7d1-4497-8601-8245446e39a2-kube-api-access-k95k2\") pod \"0477cef3-a7d1-4497-8601-8245446e39a2\" (UID: \"0477cef3-a7d1-4497-8601-8245446e39a2\") " Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.317850 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0477cef3-a7d1-4497-8601-8245446e39a2-kube-api-access-k95k2" (OuterVolumeSpecName: "kube-api-access-k95k2") pod "0477cef3-a7d1-4497-8601-8245446e39a2" (UID: "0477cef3-a7d1-4497-8601-8245446e39a2"). InnerVolumeSpecName "kube-api-access-k95k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.365759 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0477cef3-a7d1-4497-8601-8245446e39a2-config-data" (OuterVolumeSpecName: "config-data") pod "0477cef3-a7d1-4497-8601-8245446e39a2" (UID: "0477cef3-a7d1-4497-8601-8245446e39a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.383884 4875 generic.go:334] "Generic (PLEG): container finished" podID="b9316fe4-f7f0-419c-95f0-1144284fad09" containerID="bcd10314f3ccef71c79e77546abed5f566274be35a94b903e61d1915107e2bdd" exitCode=0 Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.383972 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"b9316fe4-f7f0-419c-95f0-1144284fad09","Type":"ContainerDied","Data":"bcd10314f3ccef71c79e77546abed5f566274be35a94b903e61d1915107e2bdd"} Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.383998 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"b9316fe4-f7f0-419c-95f0-1144284fad09","Type":"ContainerDied","Data":"dcfed2539f1203b8476abcdd704e4997291806c00463d54d28c49cd8d39adf41"} Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.384009 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcfed2539f1203b8476abcdd704e4997291806c00463d54d28c49cd8d39adf41" Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.396855 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.401041 4875 generic.go:334] "Generic (PLEG): container finished" podID="0477cef3-a7d1-4497-8601-8245446e39a2" containerID="3d83a0868d812a988c999cc5225bf4cecaba70eed68df293543b1352d4adbccc" exitCode=0 Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.401088 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"0477cef3-a7d1-4497-8601-8245446e39a2","Type":"ContainerDied","Data":"3d83a0868d812a988c999cc5225bf4cecaba70eed68df293543b1352d4adbccc"} Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.401132 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"0477cef3-a7d1-4497-8601-8245446e39a2","Type":"ContainerDied","Data":"a0c03e14f9c3b44b0471c46c7c972354274e30c5b34a64d8dbbc4bd39983e826"} Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.401152 4875 scope.go:117] "RemoveContainer" containerID="3d83a0868d812a988c999cc5225bf4cecaba70eed68df293543b1352d4adbccc" Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.401321 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.413567 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0477cef3-a7d1-4497-8601-8245446e39a2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.413615 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k95k2\" (UniqueName: \"kubernetes.io/projected/0477cef3-a7d1-4497-8601-8245446e39a2-kube-api-access-k95k2\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.476253 4875 scope.go:117] "RemoveContainer" containerID="3d83a0868d812a988c999cc5225bf4cecaba70eed68df293543b1352d4adbccc" Jan 30 17:29:07 crc kubenswrapper[4875]: E0130 17:29:07.476828 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d83a0868d812a988c999cc5225bf4cecaba70eed68df293543b1352d4adbccc\": container with ID starting with 3d83a0868d812a988c999cc5225bf4cecaba70eed68df293543b1352d4adbccc not found: ID does not exist" containerID="3d83a0868d812a988c999cc5225bf4cecaba70eed68df293543b1352d4adbccc" Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.476881 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d83a0868d812a988c999cc5225bf4cecaba70eed68df293543b1352d4adbccc"} err="failed to get container status \"3d83a0868d812a988c999cc5225bf4cecaba70eed68df293543b1352d4adbccc\": rpc error: code = NotFound desc = could not find container \"3d83a0868d812a988c999cc5225bf4cecaba70eed68df293543b1352d4adbccc\": container with ID starting with 3d83a0868d812a988c999cc5225bf4cecaba70eed68df293543b1352d4adbccc not found: ID does not exist" Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.483537 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.496195 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.528220 4875 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9316fe4-f7f0-419c-95f0-1144284fad09-config-data\") pod \"b9316fe4-f7f0-419c-95f0-1144284fad09\" (UID: \"b9316fe4-f7f0-419c-95f0-1144284fad09\") " Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.528285 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htmt2\" (UniqueName: \"kubernetes.io/projected/b9316fe4-f7f0-419c-95f0-1144284fad09-kube-api-access-htmt2\") pod \"b9316fe4-f7f0-419c-95f0-1144284fad09\" (UID: \"b9316fe4-f7f0-419c-95f0-1144284fad09\") " Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.534866 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9316fe4-f7f0-419c-95f0-1144284fad09-kube-api-access-htmt2" (OuterVolumeSpecName: "kube-api-access-htmt2") pod "b9316fe4-f7f0-419c-95f0-1144284fad09" (UID: "b9316fe4-f7f0-419c-95f0-1144284fad09"). InnerVolumeSpecName "kube-api-access-htmt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.554459 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9316fe4-f7f0-419c-95f0-1144284fad09-config-data" (OuterVolumeSpecName: "config-data") pod "b9316fe4-f7f0-419c-95f0-1144284fad09" (UID: "b9316fe4-f7f0-419c-95f0-1144284fad09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.630127 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9316fe4-f7f0-419c-95f0-1144284fad09-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:07 crc kubenswrapper[4875]: I0130 17:29:07.630161 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htmt2\" (UniqueName: \"kubernetes.io/projected/b9316fe4-f7f0-419c-95f0-1144284fad09-kube-api-access-htmt2\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.087441 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.088062 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-2" podUID="e1426e7d-e54e-492d-816c-1e8937cce809" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://fe16689d16b3b6af17c350eb60b10b60390570bd7bb25d05e0b81b18b7ad3798" gracePeriod=30 Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.098877 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.099082 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-1" podUID="58bd828d-3607-4a68-adb6-05c6e555631a" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://8f53284682ea66a25e42ba446f819be5728bf1b9abf7c463e70ccf01d300827d" gracePeriod=30 Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.117609 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.117827 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-2" 
podUID="86cb2eb9-9adf-4433-835a-7302ff4b13b2" containerName="nova-kuttl-metadata-log" containerID="cri-o://fc93cb39617ac268b8e7e71afc2c8b51b8cf3818487d03242c0b51e0f04a527b" gracePeriod=30 Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.117945 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="86cb2eb9-9adf-4433-835a-7302ff4b13b2" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://8eea69cc5640da551137353add9e0c6b0a39c59be7d790612653f527ad0011a1" gracePeriod=30 Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.132714 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.133044 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="f6222d09-d842-407b-97bd-d872fca5510d" containerName="nova-kuttl-metadata-log" containerID="cri-o://16de2dc9b2e33e043e0e3802ca864401fa7e35279379ee1ba2227610c9cea1f6" gracePeriod=30 Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.133088 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="f6222d09-d842-407b-97bd-d872fca5510d" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://008c83a3f5adb1c42e9a4347c401990325ba3b9906a7da2976a264b27ec58e00" gracePeriod=30 Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.148671 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0477cef3-a7d1-4497-8601-8245446e39a2" path="/var/lib/kubelet/pods/0477cef3-a7d1-4497-8601-8245446e39a2/volumes" Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.410142 4875 generic.go:334] "Generic (PLEG): container finished" podID="86cb2eb9-9adf-4433-835a-7302ff4b13b2" containerID="fc93cb39617ac268b8e7e71afc2c8b51b8cf3818487d03242c0b51e0f04a527b" exitCode=143 Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.410224 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"86cb2eb9-9adf-4433-835a-7302ff4b13b2","Type":"ContainerDied","Data":"fc93cb39617ac268b8e7e71afc2c8b51b8cf3818487d03242c0b51e0f04a527b"} Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.412187 4875 generic.go:334] "Generic (PLEG): container finished" podID="f6222d09-d842-407b-97bd-d872fca5510d" containerID="16de2dc9b2e33e043e0e3802ca864401fa7e35279379ee1ba2227610c9cea1f6" exitCode=143 Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.412250 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"f6222d09-d842-407b-97bd-d872fca5510d","Type":"ContainerDied","Data":"16de2dc9b2e33e043e0e3802ca864401fa7e35279379ee1ba2227610c9cea1f6"} Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.413306 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.436183 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.444133 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.585158 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.585360 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" podUID="7ba05e22-391a-4edd-b6d5-ca3964dfb482" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://d5ae90652a4e4dad809da2424a6015bec8f7c0c581d6ccb9a7625d3758b466fe" gracePeriod=30 Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.593320 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 30 17:29:08 crc kubenswrapper[4875]: I0130 17:29:08.593512 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" podUID="b5008612-2354-43ed-a738-2eef9ae5b76e" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://6910fc8307c04489fa88c20354d73cbf945d26cfc30b944a4176cde01620fa23" gracePeriod=30 Jan 30 17:29:09 crc kubenswrapper[4875]: E0130 17:29:09.726039 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8f53284682ea66a25e42ba446f819be5728bf1b9abf7c463e70ccf01d300827d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:29:09 crc kubenswrapper[4875]: E0130 17:29:09.729651 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8f53284682ea66a25e42ba446f819be5728bf1b9abf7c463e70ccf01d300827d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:29:09 crc kubenswrapper[4875]: E0130 17:29:09.730762 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8f53284682ea66a25e42ba446f819be5728bf1b9abf7c463e70ccf01d300827d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:29:09 crc kubenswrapper[4875]: E0130 17:29:09.730862 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-1" podUID="58bd828d-3607-4a68-adb6-05c6e555631a" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:29:09 crc kubenswrapper[4875]: E0130 17:29:09.749233 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fe16689d16b3b6af17c350eb60b10b60390570bd7bb25d05e0b81b18b7ad3798" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:29:09 crc 
kubenswrapper[4875]: E0130 17:29:09.750481 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fe16689d16b3b6af17c350eb60b10b60390570bd7bb25d05e0b81b18b7ad3798" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:29:09 crc kubenswrapper[4875]: E0130 17:29:09.751723 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fe16689d16b3b6af17c350eb60b10b60390570bd7bb25d05e0b81b18b7ad3798" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:29:09 crc kubenswrapper[4875]: E0130 17:29:09.751764 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-2" podUID="e1426e7d-e54e-492d-816c-1e8937cce809" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:29:10 crc kubenswrapper[4875]: I0130 17:29:10.151740 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9316fe4-f7f0-419c-95f0-1144284fad09" path="/var/lib/kubelet/pods/b9316fe4-f7f0-419c-95f0-1144284fad09/volumes" Jan 30 17:29:10 crc kubenswrapper[4875]: E0130 17:29:10.924256 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5ae90652a4e4dad809da2424a6015bec8f7c0c581d6ccb9a7625d3758b466fe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:29:10 crc kubenswrapper[4875]: E0130 17:29:10.925783 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5ae90652a4e4dad809da2424a6015bec8f7c0c581d6ccb9a7625d3758b466fe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:29:10 crc kubenswrapper[4875]: E0130 17:29:10.927734 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5ae90652a4e4dad809da2424a6015bec8f7c0c581d6ccb9a7625d3758b466fe" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:29:10 crc kubenswrapper[4875]: E0130 17:29:10.927782 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" podUID="7ba05e22-391a-4edd-b6d5-ca3964dfb482" containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:29:10 crc kubenswrapper[4875]: E0130 17:29:10.953029 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6910fc8307c04489fa88c20354d73cbf945d26cfc30b944a4176cde01620fa23" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:29:10 crc kubenswrapper[4875]: E0130 17:29:10.954508 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc 
= command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6910fc8307c04489fa88c20354d73cbf945d26cfc30b944a4176cde01620fa23" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:29:10 crc kubenswrapper[4875]: E0130 17:29:10.955693 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6910fc8307c04489fa88c20354d73cbf945d26cfc30b944a4176cde01620fa23" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:29:10 crc kubenswrapper[4875]: E0130 17:29:10.955742 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" podUID="b5008612-2354-43ed-a738-2eef9ae5b76e" containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.258357 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="86cb2eb9-9adf-4433-835a-7302ff4b13b2" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.181:8775/\": read tcp 10.217.0.2:43160->10.217.0.181:8775: read: connection reset by peer" Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.258423 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="86cb2eb9-9adf-4433-835a-7302ff4b13b2" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.181:8775/\": read tcp 10.217.0.2:43170->10.217.0.181:8775: read: connection reset by peer" Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.261716 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="f6222d09-d842-407b-97bd-d872fca5510d" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.182:8775/\": read tcp 10.217.0.2:49788->10.217.0.182:8775: read: connection reset by peer" Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.261827 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="f6222d09-d842-407b-97bd-d872fca5510d" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.182:8775/\": read tcp 10.217.0.2:49800->10.217.0.182:8775: read: connection reset by peer" Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.438769 4875 generic.go:334] "Generic (PLEG): container finished" podID="86cb2eb9-9adf-4433-835a-7302ff4b13b2" containerID="8eea69cc5640da551137353add9e0c6b0a39c59be7d790612653f527ad0011a1" exitCode=0 Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.438848 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"86cb2eb9-9adf-4433-835a-7302ff4b13b2","Type":"ContainerDied","Data":"8eea69cc5640da551137353add9e0c6b0a39c59be7d790612653f527ad0011a1"} Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.441680 4875 generic.go:334] "Generic (PLEG): container finished" podID="f6222d09-d842-407b-97bd-d872fca5510d" containerID="008c83a3f5adb1c42e9a4347c401990325ba3b9906a7da2976a264b27ec58e00" exitCode=0 Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.441723 4875 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"f6222d09-d842-407b-97bd-d872fca5510d","Type":"ContainerDied","Data":"008c83a3f5adb1c42e9a4347c401990325ba3b9906a7da2976a264b27ec58e00"} Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.807811 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.814779 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.905665 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6222d09-d842-407b-97bd-d872fca5510d-logs\") pod \"f6222d09-d842-407b-97bd-d872fca5510d\" (UID: \"f6222d09-d842-407b-97bd-d872fca5510d\") " Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.905757 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cb2eb9-9adf-4433-835a-7302ff4b13b2-config-data\") pod \"86cb2eb9-9adf-4433-835a-7302ff4b13b2\" (UID: \"86cb2eb9-9adf-4433-835a-7302ff4b13b2\") " Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.905863 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbgtl\" (UniqueName: \"kubernetes.io/projected/f6222d09-d842-407b-97bd-d872fca5510d-kube-api-access-rbgtl\") pod \"f6222d09-d842-407b-97bd-d872fca5510d\" (UID: \"f6222d09-d842-407b-97bd-d872fca5510d\") " Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.905930 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4l5w\" (UniqueName: \"kubernetes.io/projected/86cb2eb9-9adf-4433-835a-7302ff4b13b2-kube-api-access-k4l5w\") pod \"86cb2eb9-9adf-4433-835a-7302ff4b13b2\" (UID: \"86cb2eb9-9adf-4433-835a-7302ff4b13b2\") " Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.906017 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86cb2eb9-9adf-4433-835a-7302ff4b13b2-logs\") pod \"86cb2eb9-9adf-4433-835a-7302ff4b13b2\" (UID: \"86cb2eb9-9adf-4433-835a-7302ff4b13b2\") " Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.906155 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6222d09-d842-407b-97bd-d872fca5510d-config-data\") pod \"f6222d09-d842-407b-97bd-d872fca5510d\" (UID: \"f6222d09-d842-407b-97bd-d872fca5510d\") " Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.906279 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6222d09-d842-407b-97bd-d872fca5510d-logs" (OuterVolumeSpecName: "logs") pod "f6222d09-d842-407b-97bd-d872fca5510d" (UID: "f6222d09-d842-407b-97bd-d872fca5510d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.906905 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86cb2eb9-9adf-4433-835a-7302ff4b13b2-logs" (OuterVolumeSpecName: "logs") pod "86cb2eb9-9adf-4433-835a-7302ff4b13b2" (UID: "86cb2eb9-9adf-4433-835a-7302ff4b13b2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.907345 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6222d09-d842-407b-97bd-d872fca5510d-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.907370 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86cb2eb9-9adf-4433-835a-7302ff4b13b2-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.912818 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86cb2eb9-9adf-4433-835a-7302ff4b13b2-kube-api-access-k4l5w" (OuterVolumeSpecName: "kube-api-access-k4l5w") pod "86cb2eb9-9adf-4433-835a-7302ff4b13b2" (UID: "86cb2eb9-9adf-4433-835a-7302ff4b13b2"). InnerVolumeSpecName "kube-api-access-k4l5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.914882 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6222d09-d842-407b-97bd-d872fca5510d-kube-api-access-rbgtl" (OuterVolumeSpecName: "kube-api-access-rbgtl") pod "f6222d09-d842-407b-97bd-d872fca5510d" (UID: "f6222d09-d842-407b-97bd-d872fca5510d"). InnerVolumeSpecName "kube-api-access-rbgtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.928540 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6222d09-d842-407b-97bd-d872fca5510d-config-data" (OuterVolumeSpecName: "config-data") pod "f6222d09-d842-407b-97bd-d872fca5510d" (UID: "f6222d09-d842-407b-97bd-d872fca5510d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:11 crc kubenswrapper[4875]: I0130 17:29:11.939812 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86cb2eb9-9adf-4433-835a-7302ff4b13b2-config-data" (OuterVolumeSpecName: "config-data") pod "86cb2eb9-9adf-4433-835a-7302ff4b13b2" (UID: "86cb2eb9-9adf-4433-835a-7302ff4b13b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:12 crc kubenswrapper[4875]: I0130 17:29:12.008460 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6222d09-d842-407b-97bd-d872fca5510d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:12 crc kubenswrapper[4875]: I0130 17:29:12.008504 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86cb2eb9-9adf-4433-835a-7302ff4b13b2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:12 crc kubenswrapper[4875]: I0130 17:29:12.008518 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbgtl\" (UniqueName: \"kubernetes.io/projected/f6222d09-d842-407b-97bd-d872fca5510d-kube-api-access-rbgtl\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:12 crc kubenswrapper[4875]: I0130 17:29:12.008531 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4l5w\" (UniqueName: \"kubernetes.io/projected/86cb2eb9-9adf-4433-835a-7302ff4b13b2-kube-api-access-k4l5w\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:12 crc kubenswrapper[4875]: I0130 17:29:12.452209 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"86cb2eb9-9adf-4433-835a-7302ff4b13b2","Type":"ContainerDied","Data":"bf81d305353e66faa93566fcb72180941bd8d226711c52e88e82b4deb0e3e13f"} Jan 30 17:29:12 crc kubenswrapper[4875]: I0130 17:29:12.452272 4875 scope.go:117] "RemoveContainer" containerID="8eea69cc5640da551137353add9e0c6b0a39c59be7d790612653f527ad0011a1" Jan 30 17:29:12 crc kubenswrapper[4875]: I0130 17:29:12.452381 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 30 17:29:12 crc kubenswrapper[4875]: I0130 17:29:12.457003 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"f6222d09-d842-407b-97bd-d872fca5510d","Type":"ContainerDied","Data":"f3bef6f5927251e0503aed6bfb09ae82d5a495c563f7172fc084ce0b1df92152"} Jan 30 17:29:12 crc kubenswrapper[4875]: I0130 17:29:12.457053 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 30 17:29:12 crc kubenswrapper[4875]: I0130 17:29:12.515221 4875 scope.go:117] "RemoveContainer" containerID="fc93cb39617ac268b8e7e71afc2c8b51b8cf3818487d03242c0b51e0f04a527b" Jan 30 17:29:12 crc kubenswrapper[4875]: I0130 17:29:12.536331 4875 scope.go:117] "RemoveContainer" containerID="008c83a3f5adb1c42e9a4347c401990325ba3b9906a7da2976a264b27ec58e00" Jan 30 17:29:12 crc kubenswrapper[4875]: I0130 17:29:12.544623 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 30 17:29:12 crc kubenswrapper[4875]: I0130 17:29:12.552160 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 30 17:29:12 crc kubenswrapper[4875]: I0130 17:29:12.559726 4875 scope.go:117] "RemoveContainer" containerID="16de2dc9b2e33e043e0e3802ca864401fa7e35279379ee1ba2227610c9cea1f6" Jan 30 17:29:12 crc kubenswrapper[4875]: I0130 17:29:12.560437 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 30 17:29:12 crc kubenswrapper[4875]: I0130 17:29:12.570173 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.117638 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.171192 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.233845 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxzb5\" (UniqueName: \"kubernetes.io/projected/e1426e7d-e54e-492d-816c-1e8937cce809-kube-api-access-dxzb5\") pod \"e1426e7d-e54e-492d-816c-1e8937cce809\" (UID: \"e1426e7d-e54e-492d-816c-1e8937cce809\") " Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.234020 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1426e7d-e54e-492d-816c-1e8937cce809-config-data\") pod \"e1426e7d-e54e-492d-816c-1e8937cce809\" (UID: \"e1426e7d-e54e-492d-816c-1e8937cce809\") " Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.239361 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1426e7d-e54e-492d-816c-1e8937cce809-kube-api-access-dxzb5" (OuterVolumeSpecName: "kube-api-access-dxzb5") pod "e1426e7d-e54e-492d-816c-1e8937cce809" (UID: "e1426e7d-e54e-492d-816c-1e8937cce809"). InnerVolumeSpecName "kube-api-access-dxzb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.258689 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1426e7d-e54e-492d-816c-1e8937cce809-config-data" (OuterVolumeSpecName: "config-data") pod "e1426e7d-e54e-492d-816c-1e8937cce809" (UID: "e1426e7d-e54e-492d-816c-1e8937cce809"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.334949 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmvfg\" (UniqueName: \"kubernetes.io/projected/58bd828d-3607-4a68-adb6-05c6e555631a-kube-api-access-qmvfg\") pod \"58bd828d-3607-4a68-adb6-05c6e555631a\" (UID: \"58bd828d-3607-4a68-adb6-05c6e555631a\") " Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.335021 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58bd828d-3607-4a68-adb6-05c6e555631a-config-data\") pod \"58bd828d-3607-4a68-adb6-05c6e555631a\" (UID: \"58bd828d-3607-4a68-adb6-05c6e555631a\") " Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.335422 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxzb5\" (UniqueName: \"kubernetes.io/projected/e1426e7d-e54e-492d-816c-1e8937cce809-kube-api-access-dxzb5\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.335438 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1426e7d-e54e-492d-816c-1e8937cce809-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.337747 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58bd828d-3607-4a68-adb6-05c6e555631a-kube-api-access-qmvfg" (OuterVolumeSpecName: "kube-api-access-qmvfg") pod "58bd828d-3607-4a68-adb6-05c6e555631a" (UID: "58bd828d-3607-4a68-adb6-05c6e555631a"). InnerVolumeSpecName "kube-api-access-qmvfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.356471 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58bd828d-3607-4a68-adb6-05c6e555631a-config-data" (OuterVolumeSpecName: "config-data") pod "58bd828d-3607-4a68-adb6-05c6e555631a" (UID: "58bd828d-3607-4a68-adb6-05c6e555631a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.436871 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmvfg\" (UniqueName: \"kubernetes.io/projected/58bd828d-3607-4a68-adb6-05c6e555631a-kube-api-access-qmvfg\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.437231 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58bd828d-3607-4a68-adb6-05c6e555631a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.468145 4875 generic.go:334] "Generic (PLEG): container finished" podID="b5008612-2354-43ed-a738-2eef9ae5b76e" containerID="6910fc8307c04489fa88c20354d73cbf945d26cfc30b944a4176cde01620fa23" exitCode=0 Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.468262 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"b5008612-2354-43ed-a738-2eef9ae5b76e","Type":"ContainerDied","Data":"6910fc8307c04489fa88c20354d73cbf945d26cfc30b944a4176cde01620fa23"} Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.469943 4875 generic.go:334] "Generic (PLEG): container finished" podID="7ba05e22-391a-4edd-b6d5-ca3964dfb482" containerID="d5ae90652a4e4dad809da2424a6015bec8f7c0c581d6ccb9a7625d3758b466fe" exitCode=0 Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.470065 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"7ba05e22-391a-4edd-b6d5-ca3964dfb482","Type":"ContainerDied","Data":"d5ae90652a4e4dad809da2424a6015bec8f7c0c581d6ccb9a7625d3758b466fe"} Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.472148 4875 generic.go:334] "Generic (PLEG): container finished" podID="58bd828d-3607-4a68-adb6-05c6e555631a" containerID="8f53284682ea66a25e42ba446f819be5728bf1b9abf7c463e70ccf01d300827d" exitCode=0 Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.472183 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"58bd828d-3607-4a68-adb6-05c6e555631a","Type":"ContainerDied","Data":"8f53284682ea66a25e42ba446f819be5728bf1b9abf7c463e70ccf01d300827d"} Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.472218 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"58bd828d-3607-4a68-adb6-05c6e555631a","Type":"ContainerDied","Data":"a77c99c4e073e0b2ad02dc6b22d56126f5582ba536dfaed45aeec3279122e84f"} Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.472254 4875 scope.go:117] "RemoveContainer" containerID="8f53284682ea66a25e42ba446f819be5728bf1b9abf7c463e70ccf01d300827d" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.472434 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.473536 4875 generic.go:334] "Generic (PLEG): container finished" podID="e1426e7d-e54e-492d-816c-1e8937cce809" containerID="fe16689d16b3b6af17c350eb60b10b60390570bd7bb25d05e0b81b18b7ad3798" exitCode=0 Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.473630 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"e1426e7d-e54e-492d-816c-1e8937cce809","Type":"ContainerDied","Data":"fe16689d16b3b6af17c350eb60b10b60390570bd7bb25d05e0b81b18b7ad3798"} Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.473670 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.473692 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"e1426e7d-e54e-492d-816c-1e8937cce809","Type":"ContainerDied","Data":"f56e859248ee999de409681c7cec3829c2de7c7e4db0b8edad512b8f0245d38f"} Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.532735 4875 scope.go:117] "RemoveContainer" containerID="8f53284682ea66a25e42ba446f819be5728bf1b9abf7c463e70ccf01d300827d" Jan 30 17:29:13 crc kubenswrapper[4875]: E0130 17:29:13.533199 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f53284682ea66a25e42ba446f819be5728bf1b9abf7c463e70ccf01d300827d\": container with ID starting with 8f53284682ea66a25e42ba446f819be5728bf1b9abf7c463e70ccf01d300827d not found: ID does not exist" containerID="8f53284682ea66a25e42ba446f819be5728bf1b9abf7c463e70ccf01d300827d" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.533229 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f53284682ea66a25e42ba446f819be5728bf1b9abf7c463e70ccf01d300827d"} err="failed to get container status \"8f53284682ea66a25e42ba446f819be5728bf1b9abf7c463e70ccf01d300827d\": rpc error: code = NotFound desc = could not find container \"8f53284682ea66a25e42ba446f819be5728bf1b9abf7c463e70ccf01d300827d\": container with ID starting with 8f53284682ea66a25e42ba446f819be5728bf1b9abf7c463e70ccf01d300827d not found: ID does not exist" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.533252 4875 scope.go:117] "RemoveContainer" containerID="fe16689d16b3b6af17c350eb60b10b60390570bd7bb25d05e0b81b18b7ad3798" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.537429 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.547066 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.557973 4875 scope.go:117] "RemoveContainer" containerID="fe16689d16b3b6af17c350eb60b10b60390570bd7bb25d05e0b81b18b7ad3798" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.559404 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.566175 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 30 17:29:13 crc kubenswrapper[4875]: E0130 17:29:13.597716 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"fe16689d16b3b6af17c350eb60b10b60390570bd7bb25d05e0b81b18b7ad3798\": container with ID starting with fe16689d16b3b6af17c350eb60b10b60390570bd7bb25d05e0b81b18b7ad3798 not found: ID does not exist" containerID="fe16689d16b3b6af17c350eb60b10b60390570bd7bb25d05e0b81b18b7ad3798" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.597782 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe16689d16b3b6af17c350eb60b10b60390570bd7bb25d05e0b81b18b7ad3798"} err="failed to get container status \"fe16689d16b3b6af17c350eb60b10b60390570bd7bb25d05e0b81b18b7ad3798\": rpc error: code = NotFound desc = could not find container \"fe16689d16b3b6af17c350eb60b10b60390570bd7bb25d05e0b81b18b7ad3798\": container with ID starting with fe16689d16b3b6af17c350eb60b10b60390570bd7bb25d05e0b81b18b7ad3798 not found: ID does not exist" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.816499 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.821017 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.951946 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-962ns\" (UniqueName: \"kubernetes.io/projected/7ba05e22-391a-4edd-b6d5-ca3964dfb482-kube-api-access-962ns\") pod \"7ba05e22-391a-4edd-b6d5-ca3964dfb482\" (UID: \"7ba05e22-391a-4edd-b6d5-ca3964dfb482\") " Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.952222 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5008612-2354-43ed-a738-2eef9ae5b76e-config-data\") pod \"b5008612-2354-43ed-a738-2eef9ae5b76e\" (UID: \"b5008612-2354-43ed-a738-2eef9ae5b76e\") " Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.952281 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pbx4\" (UniqueName: \"kubernetes.io/projected/b5008612-2354-43ed-a738-2eef9ae5b76e-kube-api-access-8pbx4\") pod \"b5008612-2354-43ed-a738-2eef9ae5b76e\" (UID: \"b5008612-2354-43ed-a738-2eef9ae5b76e\") " Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.952407 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ba05e22-391a-4edd-b6d5-ca3964dfb482-config-data\") pod \"7ba05e22-391a-4edd-b6d5-ca3964dfb482\" (UID: \"7ba05e22-391a-4edd-b6d5-ca3964dfb482\") " Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.956185 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ba05e22-391a-4edd-b6d5-ca3964dfb482-kube-api-access-962ns" (OuterVolumeSpecName: "kube-api-access-962ns") pod "7ba05e22-391a-4edd-b6d5-ca3964dfb482" (UID: "7ba05e22-391a-4edd-b6d5-ca3964dfb482"). InnerVolumeSpecName "kube-api-access-962ns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.957396 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5008612-2354-43ed-a738-2eef9ae5b76e-kube-api-access-8pbx4" (OuterVolumeSpecName: "kube-api-access-8pbx4") pod "b5008612-2354-43ed-a738-2eef9ae5b76e" (UID: "b5008612-2354-43ed-a738-2eef9ae5b76e"). InnerVolumeSpecName "kube-api-access-8pbx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.972552 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ba05e22-391a-4edd-b6d5-ca3964dfb482-config-data" (OuterVolumeSpecName: "config-data") pod "7ba05e22-391a-4edd-b6d5-ca3964dfb482" (UID: "7ba05e22-391a-4edd-b6d5-ca3964dfb482"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:13 crc kubenswrapper[4875]: I0130 17:29:13.973213 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5008612-2354-43ed-a738-2eef9ae5b76e-config-data" (OuterVolumeSpecName: "config-data") pod "b5008612-2354-43ed-a738-2eef9ae5b76e" (UID: "b5008612-2354-43ed-a738-2eef9ae5b76e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.054695 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ba05e22-391a-4edd-b6d5-ca3964dfb482-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.054743 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-962ns\" (UniqueName: \"kubernetes.io/projected/7ba05e22-391a-4edd-b6d5-ca3964dfb482-kube-api-access-962ns\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.054758 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5008612-2354-43ed-a738-2eef9ae5b76e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.054771 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pbx4\" (UniqueName: \"kubernetes.io/projected/b5008612-2354-43ed-a738-2eef9ae5b76e-kube-api-access-8pbx4\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.146647 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58bd828d-3607-4a68-adb6-05c6e555631a" path="/var/lib/kubelet/pods/58bd828d-3607-4a68-adb6-05c6e555631a/volumes" Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.147267 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86cb2eb9-9adf-4433-835a-7302ff4b13b2" path="/var/lib/kubelet/pods/86cb2eb9-9adf-4433-835a-7302ff4b13b2/volumes" Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.147899 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1426e7d-e54e-492d-816c-1e8937cce809" path="/var/lib/kubelet/pods/e1426e7d-e54e-492d-816c-1e8937cce809/volumes" Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.149073 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6222d09-d842-407b-97bd-d872fca5510d" path="/var/lib/kubelet/pods/f6222d09-d842-407b-97bd-d872fca5510d/volumes" Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.486707 4875 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"b5008612-2354-43ed-a738-2eef9ae5b76e","Type":"ContainerDied","Data":"21789e436b82b570247166f6ad0aad4ac6cbd1071bfc1d9bc1599b1ea87faae7"} Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.486759 4875 scope.go:117] "RemoveContainer" containerID="6910fc8307c04489fa88c20354d73cbf945d26cfc30b944a4176cde01620fa23" Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.486803 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.488786 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"7ba05e22-391a-4edd-b6d5-ca3964dfb482","Type":"ContainerDied","Data":"07524a623bbf194edb9de133a1a370ec75b6a705d8b49248385e3ac62cffd5e8"} Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.488867 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.516105 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.527474 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.533052 4875 scope.go:117] "RemoveContainer" containerID="d5ae90652a4e4dad809da2424a6015bec8f7c0c581d6ccb9a7625d3758b466fe" Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.535483 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 30 17:29:14 crc kubenswrapper[4875]: I0130 17:29:14.543858 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 30 17:29:16 crc kubenswrapper[4875]: I0130 17:29:16.144709 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ba05e22-391a-4edd-b6d5-ca3964dfb482" path="/var/lib/kubelet/pods/7ba05e22-391a-4edd-b6d5-ca3964dfb482/volumes" Jan 30 17:29:16 crc kubenswrapper[4875]: I0130 17:29:16.145423 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5008612-2354-43ed-a738-2eef9ae5b76e" path="/var/lib/kubelet/pods/b5008612-2354-43ed-a738-2eef9ae5b76e/volumes" Jan 30 17:29:26 crc kubenswrapper[4875]: I0130 17:29:26.287507 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:29:26 crc kubenswrapper[4875]: I0130 17:29:26.288237 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:29:27 crc kubenswrapper[4875]: I0130 17:29:27.600275 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:29:27 crc kubenswrapper[4875]: I0130 17:29:27.600649 4875 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="nova-kuttl-default/nova-kuttl-api-0" podUID="ace9a809-1aa0-434a-9dda-d54b391f0e04" containerName="nova-kuttl-api-log" containerID="cri-o://0ce73d8f741f081cda553a25f727fe277d7a01ad2d74c80444615dd734dec715" gracePeriod=30 Jan 30 17:29:27 crc kubenswrapper[4875]: I0130 17:29:27.600946 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="ace9a809-1aa0-434a-9dda-d54b391f0e04" containerName="nova-kuttl-api-api" containerID="cri-o://bd81be4df4049c1f946fbd3f606e7f77c6c8030971ebec5034a7edc59a1c0cfc" gracePeriod=30 Jan 30 17:29:27 crc kubenswrapper[4875]: I0130 17:29:27.991968 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:29:27 crc kubenswrapper[4875]: I0130 17:29:27.992525 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="3789d70d-0e1c-44e9-91f5-86c2c3dc4a33" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://a478c42709c6655749ea65795f5769f0aa2abb94d90394203f03e63050784459" gracePeriod=30 Jan 30 17:29:28 crc kubenswrapper[4875]: I0130 17:29:28.654408 4875 generic.go:334] "Generic (PLEG): container finished" podID="ace9a809-1aa0-434a-9dda-d54b391f0e04" containerID="0ce73d8f741f081cda553a25f727fe277d7a01ad2d74c80444615dd734dec715" exitCode=143 Jan 30 17:29:28 crc kubenswrapper[4875]: I0130 17:29:28.654469 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"ace9a809-1aa0-434a-9dda-d54b391f0e04","Type":"ContainerDied","Data":"0ce73d8f741f081cda553a25f727fe277d7a01ad2d74c80444615dd734dec715"} Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.334775 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.469901 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.518107 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace9a809-1aa0-434a-9dda-d54b391f0e04-config-data\") pod \"ace9a809-1aa0-434a-9dda-d54b391f0e04\" (UID: \"ace9a809-1aa0-434a-9dda-d54b391f0e04\") " Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.519120 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tfbm\" (UniqueName: \"kubernetes.io/projected/ace9a809-1aa0-434a-9dda-d54b391f0e04-kube-api-access-9tfbm\") pod \"ace9a809-1aa0-434a-9dda-d54b391f0e04\" (UID: \"ace9a809-1aa0-434a-9dda-d54b391f0e04\") " Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.519272 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace9a809-1aa0-434a-9dda-d54b391f0e04-logs\") pod \"ace9a809-1aa0-434a-9dda-d54b391f0e04\" (UID: \"ace9a809-1aa0-434a-9dda-d54b391f0e04\") " Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.520152 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace9a809-1aa0-434a-9dda-d54b391f0e04-logs" (OuterVolumeSpecName: "logs") pod "ace9a809-1aa0-434a-9dda-d54b391f0e04" (UID: "ace9a809-1aa0-434a-9dda-d54b391f0e04"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.523775 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace9a809-1aa0-434a-9dda-d54b391f0e04-kube-api-access-9tfbm" (OuterVolumeSpecName: "kube-api-access-9tfbm") pod "ace9a809-1aa0-434a-9dda-d54b391f0e04" (UID: "ace9a809-1aa0-434a-9dda-d54b391f0e04"). InnerVolumeSpecName "kube-api-access-9tfbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.540253 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace9a809-1aa0-434a-9dda-d54b391f0e04-config-data" (OuterVolumeSpecName: "config-data") pod "ace9a809-1aa0-434a-9dda-d54b391f0e04" (UID: "ace9a809-1aa0-434a-9dda-d54b391f0e04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.620811 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sms9r\" (UniqueName: \"kubernetes.io/projected/3789d70d-0e1c-44e9-91f5-86c2c3dc4a33-kube-api-access-sms9r\") pod \"3789d70d-0e1c-44e9-91f5-86c2c3dc4a33\" (UID: \"3789d70d-0e1c-44e9-91f5-86c2c3dc4a33\") " Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.620990 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3789d70d-0e1c-44e9-91f5-86c2c3dc4a33-config-data\") pod \"3789d70d-0e1c-44e9-91f5-86c2c3dc4a33\" (UID: \"3789d70d-0e1c-44e9-91f5-86c2c3dc4a33\") " Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.621320 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ace9a809-1aa0-434a-9dda-d54b391f0e04-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.621340 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tfbm\" (UniqueName: \"kubernetes.io/projected/ace9a809-1aa0-434a-9dda-d54b391f0e04-kube-api-access-9tfbm\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.621353 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ace9a809-1aa0-434a-9dda-d54b391f0e04-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.623969 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3789d70d-0e1c-44e9-91f5-86c2c3dc4a33-kube-api-access-sms9r" (OuterVolumeSpecName: "kube-api-access-sms9r") pod "3789d70d-0e1c-44e9-91f5-86c2c3dc4a33" (UID: "3789d70d-0e1c-44e9-91f5-86c2c3dc4a33"). InnerVolumeSpecName "kube-api-access-sms9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.642332 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3789d70d-0e1c-44e9-91f5-86c2c3dc4a33-config-data" (OuterVolumeSpecName: "config-data") pod "3789d70d-0e1c-44e9-91f5-86c2c3dc4a33" (UID: "3789d70d-0e1c-44e9-91f5-86c2c3dc4a33"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.676732 4875 generic.go:334] "Generic (PLEG): container finished" podID="ace9a809-1aa0-434a-9dda-d54b391f0e04" containerID="bd81be4df4049c1f946fbd3f606e7f77c6c8030971ebec5034a7edc59a1c0cfc" exitCode=0 Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.676823 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"ace9a809-1aa0-434a-9dda-d54b391f0e04","Type":"ContainerDied","Data":"bd81be4df4049c1f946fbd3f606e7f77c6c8030971ebec5034a7edc59a1c0cfc"} Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.676858 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"ace9a809-1aa0-434a-9dda-d54b391f0e04","Type":"ContainerDied","Data":"ac782b00fe097aeeb614fc6bd22dc1b6694d9f05724f81edac28624615e77f02"} Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.676877 4875 scope.go:117] "RemoveContainer" containerID="bd81be4df4049c1f946fbd3f606e7f77c6c8030971ebec5034a7edc59a1c0cfc" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.677023 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.688602 4875 generic.go:334] "Generic (PLEG): container finished" podID="3789d70d-0e1c-44e9-91f5-86c2c3dc4a33" containerID="a478c42709c6655749ea65795f5769f0aa2abb94d90394203f03e63050784459" exitCode=0 Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.688641 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"3789d70d-0e1c-44e9-91f5-86c2c3dc4a33","Type":"ContainerDied","Data":"a478c42709c6655749ea65795f5769f0aa2abb94d90394203f03e63050784459"} Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.688669 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"3789d70d-0e1c-44e9-91f5-86c2c3dc4a33","Type":"ContainerDied","Data":"8a8f401632fef95a064475357d5959f9c0c8dbfe6c0ca9be3c05db65a1fb1bf5"} Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.688723 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.714322 4875 scope.go:117] "RemoveContainer" containerID="0ce73d8f741f081cda553a25f727fe277d7a01ad2d74c80444615dd734dec715" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.727696 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sms9r\" (UniqueName: \"kubernetes.io/projected/3789d70d-0e1c-44e9-91f5-86c2c3dc4a33-kube-api-access-sms9r\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.727774 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3789d70d-0e1c-44e9-91f5-86c2c3dc4a33-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.736634 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.743846 4875 scope.go:117] "RemoveContainer" containerID="bd81be4df4049c1f946fbd3f606e7f77c6c8030971ebec5034a7edc59a1c0cfc" Jan 30 17:29:31 crc kubenswrapper[4875]: E0130 17:29:31.744284 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd81be4df4049c1f946fbd3f606e7f77c6c8030971ebec5034a7edc59a1c0cfc\": container with ID starting with bd81be4df4049c1f946fbd3f606e7f77c6c8030971ebec5034a7edc59a1c0cfc not found: ID does not exist" containerID="bd81be4df4049c1f946fbd3f606e7f77c6c8030971ebec5034a7edc59a1c0cfc" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.744310 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd81be4df4049c1f946fbd3f606e7f77c6c8030971ebec5034a7edc59a1c0cfc"} err="failed to get container status \"bd81be4df4049c1f946fbd3f606e7f77c6c8030971ebec5034a7edc59a1c0cfc\": rpc error: code = NotFound desc = could not find container \"bd81be4df4049c1f946fbd3f606e7f77c6c8030971ebec5034a7edc59a1c0cfc\": container with ID starting with bd81be4df4049c1f946fbd3f606e7f77c6c8030971ebec5034a7edc59a1c0cfc not found: ID does not exist" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.744332 4875 scope.go:117] "RemoveContainer" containerID="0ce73d8f741f081cda553a25f727fe277d7a01ad2d74c80444615dd734dec715" Jan 30 17:29:31 crc kubenswrapper[4875]: E0130 17:29:31.744800 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ce73d8f741f081cda553a25f727fe277d7a01ad2d74c80444615dd734dec715\": container with ID starting with 0ce73d8f741f081cda553a25f727fe277d7a01ad2d74c80444615dd734dec715 not found: ID does not exist" containerID="0ce73d8f741f081cda553a25f727fe277d7a01ad2d74c80444615dd734dec715" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.744856 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ce73d8f741f081cda553a25f727fe277d7a01ad2d74c80444615dd734dec715"} err="failed to get container status \"0ce73d8f741f081cda553a25f727fe277d7a01ad2d74c80444615dd734dec715\": rpc error: code = NotFound desc = could not find container \"0ce73d8f741f081cda553a25f727fe277d7a01ad2d74c80444615dd734dec715\": container with ID starting with 0ce73d8f741f081cda553a25f727fe277d7a01ad2d74c80444615dd734dec715 not found: ID does not exist" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.744875 4875 scope.go:117] "RemoveContainer" 
containerID="a478c42709c6655749ea65795f5769f0aa2abb94d90394203f03e63050784459" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.756974 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.764748 4875 scope.go:117] "RemoveContainer" containerID="a478c42709c6655749ea65795f5769f0aa2abb94d90394203f03e63050784459" Jan 30 17:29:31 crc kubenswrapper[4875]: E0130 17:29:31.765243 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a478c42709c6655749ea65795f5769f0aa2abb94d90394203f03e63050784459\": container with ID starting with a478c42709c6655749ea65795f5769f0aa2abb94d90394203f03e63050784459 not found: ID does not exist" containerID="a478c42709c6655749ea65795f5769f0aa2abb94d90394203f03e63050784459" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.765277 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a478c42709c6655749ea65795f5769f0aa2abb94d90394203f03e63050784459"} err="failed to get container status \"a478c42709c6655749ea65795f5769f0aa2abb94d90394203f03e63050784459\": rpc error: code = NotFound desc = could not find container \"a478c42709c6655749ea65795f5769f0aa2abb94d90394203f03e63050784459\": container with ID starting with a478c42709c6655749ea65795f5769f0aa2abb94d90394203f03e63050784459 not found: ID does not exist" Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.765963 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.773554 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.987119 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:29:31 crc kubenswrapper[4875]: I0130 17:29:31.987356 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="017099d9-455f-4e89-b38a-1a5400faec32" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://5ea465c5a95c127d86e40c2ae4a590c6cfe38766e828127f7eb6c7ef453b2934" gracePeriod=30 Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.001744 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.001976 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a438b47-7d96-403c-ac75-74677da11940" containerName="nova-kuttl-metadata-log" containerID="cri-o://289218439f4dc907eddae46998a6f973ba70910c4a4afbe82102c54c26816af6" gracePeriod=30 Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.002145 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a438b47-7d96-403c-ac75-74677da11940" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://450f934f2841cc5acb8da606324e8c6ed944feb63e3df1e11ec43af20635b526" gracePeriod=30 Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.146958 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3789d70d-0e1c-44e9-91f5-86c2c3dc4a33" path="/var/lib/kubelet/pods/3789d70d-0e1c-44e9-91f5-86c2c3dc4a33/volumes" Jan 30 17:29:32 
crc kubenswrapper[4875]: I0130 17:29:32.147795 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ace9a809-1aa0-434a-9dda-d54b391f0e04" path="/var/lib/kubelet/pods/ace9a809-1aa0-434a-9dda-d54b391f0e04/volumes" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.234504 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.234765 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="08294a73-b9f7-404e-b0fa-7d5b85501c39" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://e10daa96c1106b7ea170767a20b18aebde5401981cf718737782da991d9a294f" gracePeriod=30 Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.699152 4875 generic.go:334] "Generic (PLEG): container finished" podID="8a438b47-7d96-403c-ac75-74677da11940" containerID="289218439f4dc907eddae46998a6f973ba70910c4a4afbe82102c54c26816af6" exitCode=143 Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.699191 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a438b47-7d96-403c-ac75-74677da11940","Type":"ContainerDied","Data":"289218439f4dc907eddae46998a6f973ba70910c4a4afbe82102c54c26816af6"} Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.795508 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76"] Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.813013 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2"] Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.824781 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-5jz76"] Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.833840 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-s96h2"] Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.916421 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell094f9-account-delete-zd7mh"] Jan 30 17:29:32 crc kubenswrapper[4875]: E0130 17:29:32.916977 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86cb2eb9-9adf-4433-835a-7302ff4b13b2" containerName="nova-kuttl-metadata-log" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.916998 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="86cb2eb9-9adf-4433-835a-7302ff4b13b2" containerName="nova-kuttl-metadata-log" Jan 30 17:29:32 crc kubenswrapper[4875]: E0130 17:29:32.917013 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569" containerName="nova-kuttl-api-api" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917023 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569" containerName="nova-kuttl-api-api" Jan 30 17:29:32 crc kubenswrapper[4875]: E0130 17:29:32.917040 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace9a809-1aa0-434a-9dda-d54b391f0e04" containerName="nova-kuttl-api-log" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917050 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace9a809-1aa0-434a-9dda-d54b391f0e04" containerName="nova-kuttl-api-log" Jan 30 17:29:32 crc kubenswrapper[4875]: E0130 17:29:32.917064 4875 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace9a809-1aa0-434a-9dda-d54b391f0e04" containerName="nova-kuttl-api-api" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917071 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace9a809-1aa0-434a-9dda-d54b391f0e04" containerName="nova-kuttl-api-api" Jan 30 17:29:32 crc kubenswrapper[4875]: E0130 17:29:32.917080 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58bd828d-3607-4a68-adb6-05c6e555631a" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917087 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="58bd828d-3607-4a68-adb6-05c6e555631a" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:29:32 crc kubenswrapper[4875]: E0130 17:29:32.917107 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83d1464-a979-48ab-9f94-cf47197505d4" containerName="nova-kuttl-api-log" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917114 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83d1464-a979-48ab-9f94-cf47197505d4" containerName="nova-kuttl-api-log" Jan 30 17:29:32 crc kubenswrapper[4875]: E0130 17:29:32.917156 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3789d70d-0e1c-44e9-91f5-86c2c3dc4a33" containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917165 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="3789d70d-0e1c-44e9-91f5-86c2c3dc4a33" containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:29:32 crc kubenswrapper[4875]: E0130 17:29:32.917175 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0477cef3-a7d1-4497-8601-8245446e39a2" containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917182 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="0477cef3-a7d1-4497-8601-8245446e39a2" containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:29:32 crc kubenswrapper[4875]: E0130 17:29:32.917195 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9316fe4-f7f0-419c-95f0-1144284fad09" containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917202 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9316fe4-f7f0-419c-95f0-1144284fad09" containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:29:32 crc kubenswrapper[4875]: E0130 17:29:32.917214 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6222d09-d842-407b-97bd-d872fca5510d" containerName="nova-kuttl-metadata-log" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917221 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6222d09-d842-407b-97bd-d872fca5510d" containerName="nova-kuttl-metadata-log" Jan 30 17:29:32 crc kubenswrapper[4875]: E0130 17:29:32.917232 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5008612-2354-43ed-a738-2eef9ae5b76e" containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917240 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5008612-2354-43ed-a738-2eef9ae5b76e" containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:29:32 crc kubenswrapper[4875]: E0130 17:29:32.917253 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86cb2eb9-9adf-4433-835a-7302ff4b13b2" containerName="nova-kuttl-metadata-metadata" 
Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917260 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="86cb2eb9-9adf-4433-835a-7302ff4b13b2" containerName="nova-kuttl-metadata-metadata" Jan 30 17:29:32 crc kubenswrapper[4875]: E0130 17:29:32.917272 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569" containerName="nova-kuttl-api-log" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917280 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569" containerName="nova-kuttl-api-log" Jan 30 17:29:32 crc kubenswrapper[4875]: E0130 17:29:32.917290 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6222d09-d842-407b-97bd-d872fca5510d" containerName="nova-kuttl-metadata-metadata" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917297 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6222d09-d842-407b-97bd-d872fca5510d" containerName="nova-kuttl-metadata-metadata" Jan 30 17:29:32 crc kubenswrapper[4875]: E0130 17:29:32.917307 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ba05e22-391a-4edd-b6d5-ca3964dfb482" containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917316 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ba05e22-391a-4edd-b6d5-ca3964dfb482" containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:29:32 crc kubenswrapper[4875]: E0130 17:29:32.917330 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83d1464-a979-48ab-9f94-cf47197505d4" containerName="nova-kuttl-api-api" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917337 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83d1464-a979-48ab-9f94-cf47197505d4" containerName="nova-kuttl-api-api" Jan 30 17:29:32 crc kubenswrapper[4875]: E0130 17:29:32.917346 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1426e7d-e54e-492d-816c-1e8937cce809" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917353 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1426e7d-e54e-492d-816c-1e8937cce809" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917525 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="86cb2eb9-9adf-4433-835a-7302ff4b13b2" containerName="nova-kuttl-metadata-metadata" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917537 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace9a809-1aa0-434a-9dda-d54b391f0e04" containerName="nova-kuttl-api-log" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917549 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83d1464-a979-48ab-9f94-cf47197505d4" containerName="nova-kuttl-api-log" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917560 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5008612-2354-43ed-a738-2eef9ae5b76e" containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917573 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="58bd828d-3607-4a68-adb6-05c6e555631a" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917602 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6222d09-d842-407b-97bd-d872fca5510d" 
containerName="nova-kuttl-metadata-metadata" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917616 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace9a809-1aa0-434a-9dda-d54b391f0e04" containerName="nova-kuttl-api-api" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917627 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="3789d70d-0e1c-44e9-91f5-86c2c3dc4a33" containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917638 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9316fe4-f7f0-419c-95f0-1144284fad09" containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917662 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="0477cef3-a7d1-4497-8601-8245446e39a2" containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917672 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569" containerName="nova-kuttl-api-api" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917680 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83d1464-a979-48ab-9f94-cf47197505d4" containerName="nova-kuttl-api-api" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917690 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1426e7d-e54e-492d-816c-1e8937cce809" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917702 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="86cb2eb9-9adf-4433-835a-7302ff4b13b2" containerName="nova-kuttl-metadata-log" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917713 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6222d09-d842-407b-97bd-d872fca5510d" containerName="nova-kuttl-metadata-log" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917722 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ba05e22-391a-4edd-b6d5-ca3964dfb482" containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.917734 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="4623fd43-ec9d-4b2a-b9d2-a92f1bdc7569" containerName="nova-kuttl-api-log" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.918435 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell094f9-account-delete-zd7mh" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.940156 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell094f9-account-delete-zd7mh"] Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.974812 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70107d0d-90c4-4114-8328-6a12baf5c7cd-operator-scripts\") pod \"novacell094f9-account-delete-zd7mh\" (UID: \"70107d0d-90c4-4114-8328-6a12baf5c7cd\") " pod="nova-kuttl-default/novacell094f9-account-delete-zd7mh" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.974867 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw2cj\" (UniqueName: \"kubernetes.io/projected/70107d0d-90c4-4114-8328-6a12baf5c7cd-kube-api-access-cw2cj\") pod \"novacell094f9-account-delete-zd7mh\" (UID: \"70107d0d-90c4-4114-8328-6a12baf5c7cd\") " pod="nova-kuttl-default/novacell094f9-account-delete-zd7mh" Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.985855 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novaapi7118-account-delete-zn4dq"] Jan 30 17:29:32 crc kubenswrapper[4875]: I0130 17:29:32.986867 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi7118-account-delete-zn4dq" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.011527 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi7118-account-delete-zn4dq"] Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.046837 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.047091 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="455921c1-b5b6-42e8-b050-920a49161c06" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://65042108355da07156c563cba5f86beb047c983b4d734a6d89638aa3420b2e31" gracePeriod=30 Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.077080 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw2cj\" (UniqueName: \"kubernetes.io/projected/70107d0d-90c4-4114-8328-6a12baf5c7cd-kube-api-access-cw2cj\") pod \"novacell094f9-account-delete-zd7mh\" (UID: \"70107d0d-90c4-4114-8328-6a12baf5c7cd\") " pod="nova-kuttl-default/novacell094f9-account-delete-zd7mh" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.077190 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jspnv\" (UniqueName: \"kubernetes.io/projected/ebc78f90-5aac-40d5-aca2-af2d7921f98e-kube-api-access-jspnv\") pod \"novaapi7118-account-delete-zn4dq\" (UID: \"ebc78f90-5aac-40d5-aca2-af2d7921f98e\") " pod="nova-kuttl-default/novaapi7118-account-delete-zn4dq" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.077226 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70107d0d-90c4-4114-8328-6a12baf5c7cd-operator-scripts\") pod \"novacell094f9-account-delete-zd7mh\" (UID: \"70107d0d-90c4-4114-8328-6a12baf5c7cd\") " pod="nova-kuttl-default/novacell094f9-account-delete-zd7mh" Jan 30 17:29:33 crc 
kubenswrapper[4875]: I0130 17:29:33.077255 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebc78f90-5aac-40d5-aca2-af2d7921f98e-operator-scripts\") pod \"novaapi7118-account-delete-zn4dq\" (UID: \"ebc78f90-5aac-40d5-aca2-af2d7921f98e\") " pod="nova-kuttl-default/novaapi7118-account-delete-zn4dq" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.083064 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70107d0d-90c4-4114-8328-6a12baf5c7cd-operator-scripts\") pod \"novacell094f9-account-delete-zd7mh\" (UID: \"70107d0d-90c4-4114-8328-6a12baf5c7cd\") " pod="nova-kuttl-default/novacell094f9-account-delete-zd7mh" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.085323 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w"] Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.108386 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-rrm2w"] Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.120282 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw2cj\" (UniqueName: \"kubernetes.io/projected/70107d0d-90c4-4114-8328-6a12baf5c7cd-kube-api-access-cw2cj\") pod \"novacell094f9-account-delete-zd7mh\" (UID: \"70107d0d-90c4-4114-8328-6a12baf5c7cd\") " pod="nova-kuttl-default/novacell094f9-account-delete-zd7mh" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.129631 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell188b2-account-delete-k9vrw"] Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.130529 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell188b2-account-delete-k9vrw" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.146574 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell188b2-account-delete-k9vrw"] Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.153079 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7"] Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.167704 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-95xf7"] Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.177999 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebc78f90-5aac-40d5-aca2-af2d7921f98e-operator-scripts\") pod \"novaapi7118-account-delete-zn4dq\" (UID: \"ebc78f90-5aac-40d5-aca2-af2d7921f98e\") " pod="nova-kuttl-default/novaapi7118-account-delete-zn4dq" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.178045 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2522l\" (UniqueName: \"kubernetes.io/projected/bbf4b428-7d59-448f-a818-c0bc51fbd99e-kube-api-access-2522l\") pod \"novacell188b2-account-delete-k9vrw\" (UID: \"bbf4b428-7d59-448f-a818-c0bc51fbd99e\") " pod="nova-kuttl-default/novacell188b2-account-delete-k9vrw" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.178097 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbf4b428-7d59-448f-a818-c0bc51fbd99e-operator-scripts\") pod \"novacell188b2-account-delete-k9vrw\" (UID: \"bbf4b428-7d59-448f-a818-c0bc51fbd99e\") " pod="nova-kuttl-default/novacell188b2-account-delete-k9vrw" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.178140 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jspnv\" (UniqueName: \"kubernetes.io/projected/ebc78f90-5aac-40d5-aca2-af2d7921f98e-kube-api-access-jspnv\") pod \"novaapi7118-account-delete-zn4dq\" (UID: \"ebc78f90-5aac-40d5-aca2-af2d7921f98e\") " pod="nova-kuttl-default/novaapi7118-account-delete-zn4dq" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.178773 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebc78f90-5aac-40d5-aca2-af2d7921f98e-operator-scripts\") pod \"novaapi7118-account-delete-zn4dq\" (UID: \"ebc78f90-5aac-40d5-aca2-af2d7921f98e\") " pod="nova-kuttl-default/novaapi7118-account-delete-zn4dq" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.225170 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jspnv\" (UniqueName: \"kubernetes.io/projected/ebc78f90-5aac-40d5-aca2-af2d7921f98e-kube-api-access-jspnv\") pod \"novaapi7118-account-delete-zn4dq\" (UID: \"ebc78f90-5aac-40d5-aca2-af2d7921f98e\") " pod="nova-kuttl-default/novaapi7118-account-delete-zn4dq" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.278354 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell094f9-account-delete-zd7mh" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.280179 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbf4b428-7d59-448f-a818-c0bc51fbd99e-operator-scripts\") pod \"novacell188b2-account-delete-k9vrw\" (UID: \"bbf4b428-7d59-448f-a818-c0bc51fbd99e\") " pod="nova-kuttl-default/novacell188b2-account-delete-k9vrw" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.280959 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2522l\" (UniqueName: \"kubernetes.io/projected/bbf4b428-7d59-448f-a818-c0bc51fbd99e-kube-api-access-2522l\") pod \"novacell188b2-account-delete-k9vrw\" (UID: \"bbf4b428-7d59-448f-a818-c0bc51fbd99e\") " pod="nova-kuttl-default/novacell188b2-account-delete-k9vrw" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.281494 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbf4b428-7d59-448f-a818-c0bc51fbd99e-operator-scripts\") pod \"novacell188b2-account-delete-k9vrw\" (UID: \"bbf4b428-7d59-448f-a818-c0bc51fbd99e\") " pod="nova-kuttl-default/novacell188b2-account-delete-k9vrw" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.308045 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2522l\" (UniqueName: \"kubernetes.io/projected/bbf4b428-7d59-448f-a818-c0bc51fbd99e-kube-api-access-2522l\") pod \"novacell188b2-account-delete-k9vrw\" (UID: \"bbf4b428-7d59-448f-a818-c0bc51fbd99e\") " pod="nova-kuttl-default/novacell188b2-account-delete-k9vrw" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.317540 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi7118-account-delete-zn4dq" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.340466 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="455921c1-b5b6-42e8-b050-920a49161c06" containerName="nova-kuttl-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"http://10.217.0.162:6080/vnc_lite.html\": dial tcp 10.217.0.162:6080: connect: connection refused" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.469140 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell188b2-account-delete-k9vrw" Jan 30 17:29:33 crc kubenswrapper[4875]: I0130 17:29:33.828777 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell094f9-account-delete-zd7mh"] Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.002858 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi7118-account-delete-zn4dq"] Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.113468 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell188b2-account-delete-k9vrw"] Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.177381 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0579fff9-2e84-4cb6-8a96-08144cfecf05" path="/var/lib/kubelet/pods/0579fff9-2e84-4cb6-8a96-08144cfecf05/volumes" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.179487 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="337d6735-5e62-440f-80dd-78cfee827806" path="/var/lib/kubelet/pods/337d6735-5e62-440f-80dd-78cfee827806/volumes" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.180853 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7" path="/var/lib/kubelet/pods/9b7a7531-ce9a-48cf-bdd3-9ba23d6b44e7/volumes" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.181780 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cebc96df-af7e-409f-94ea-aaa530661527" path="/var/lib/kubelet/pods/cebc96df-af7e-409f-94ea-aaa530661527/volumes" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.270276 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.370854 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5mvj\" (UniqueName: \"kubernetes.io/projected/455921c1-b5b6-42e8-b050-920a49161c06-kube-api-access-v5mvj\") pod \"455921c1-b5b6-42e8-b050-920a49161c06\" (UID: \"455921c1-b5b6-42e8-b050-920a49161c06\") " Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.370936 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455921c1-b5b6-42e8-b050-920a49161c06-config-data\") pod \"455921c1-b5b6-42e8-b050-920a49161c06\" (UID: \"455921c1-b5b6-42e8-b050-920a49161c06\") " Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.378752 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/455921c1-b5b6-42e8-b050-920a49161c06-kube-api-access-v5mvj" (OuterVolumeSpecName: "kube-api-access-v5mvj") pod "455921c1-b5b6-42e8-b050-920a49161c06" (UID: "455921c1-b5b6-42e8-b050-920a49161c06"). InnerVolumeSpecName "kube-api-access-v5mvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.403033 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/455921c1-b5b6-42e8-b050-920a49161c06-config-data" (OuterVolumeSpecName: "config-data") pod "455921c1-b5b6-42e8-b050-920a49161c06" (UID: "455921c1-b5b6-42e8-b050-920a49161c06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.456079 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.472802 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455921c1-b5b6-42e8-b050-920a49161c06-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.472835 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5mvj\" (UniqueName: \"kubernetes.io/projected/455921c1-b5b6-42e8-b050-920a49161c06-kube-api-access-v5mvj\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.573999 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08294a73-b9f7-404e-b0fa-7d5b85501c39-config-data\") pod \"08294a73-b9f7-404e-b0fa-7d5b85501c39\" (UID: \"08294a73-b9f7-404e-b0fa-7d5b85501c39\") " Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.574045 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9h5p\" (UniqueName: \"kubernetes.io/projected/08294a73-b9f7-404e-b0fa-7d5b85501c39-kube-api-access-g9h5p\") pod \"08294a73-b9f7-404e-b0fa-7d5b85501c39\" (UID: \"08294a73-b9f7-404e-b0fa-7d5b85501c39\") " Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.583666 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08294a73-b9f7-404e-b0fa-7d5b85501c39-kube-api-access-g9h5p" (OuterVolumeSpecName: "kube-api-access-g9h5p") pod "08294a73-b9f7-404e-b0fa-7d5b85501c39" (UID: "08294a73-b9f7-404e-b0fa-7d5b85501c39"). InnerVolumeSpecName "kube-api-access-g9h5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.594989 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08294a73-b9f7-404e-b0fa-7d5b85501c39-config-data" (OuterVolumeSpecName: "config-data") pod "08294a73-b9f7-404e-b0fa-7d5b85501c39" (UID: "08294a73-b9f7-404e-b0fa-7d5b85501c39"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.675814 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08294a73-b9f7-404e-b0fa-7d5b85501c39-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.675860 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9h5p\" (UniqueName: \"kubernetes.io/projected/08294a73-b9f7-404e-b0fa-7d5b85501c39-kube-api-access-g9h5p\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.715787 4875 generic.go:334] "Generic (PLEG): container finished" podID="08294a73-b9f7-404e-b0fa-7d5b85501c39" containerID="e10daa96c1106b7ea170767a20b18aebde5401981cf718737782da991d9a294f" exitCode=0 Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.715840 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"08294a73-b9f7-404e-b0fa-7d5b85501c39","Type":"ContainerDied","Data":"e10daa96c1106b7ea170767a20b18aebde5401981cf718737782da991d9a294f"} Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.716200 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"08294a73-b9f7-404e-b0fa-7d5b85501c39","Type":"ContainerDied","Data":"6ee89fe1957fecd926cf196c0fea6bb996f4e6e2b923bf360264b13d4fae7a63"} Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.716223 4875 scope.go:117] "RemoveContainer" containerID="e10daa96c1106b7ea170767a20b18aebde5401981cf718737782da991d9a294f" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.715863 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.718698 4875 generic.go:334] "Generic (PLEG): container finished" podID="bbf4b428-7d59-448f-a818-c0bc51fbd99e" containerID="0ed3ba63f5836d4284e5d0b19fd95871dcfd9e7c9402f3725a2f46fcf70bad3f" exitCode=0 Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.718784 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell188b2-account-delete-k9vrw" event={"ID":"bbf4b428-7d59-448f-a818-c0bc51fbd99e","Type":"ContainerDied","Data":"0ed3ba63f5836d4284e5d0b19fd95871dcfd9e7c9402f3725a2f46fcf70bad3f"} Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.718823 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell188b2-account-delete-k9vrw" event={"ID":"bbf4b428-7d59-448f-a818-c0bc51fbd99e","Type":"ContainerStarted","Data":"a04231aacedb522a7db7ff23439a26245ecdbe8bddb1ddb38f8e45eee1328482"} Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.731335 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"455921c1-b5b6-42e8-b050-920a49161c06","Type":"ContainerDied","Data":"65042108355da07156c563cba5f86beb047c983b4d734a6d89638aa3420b2e31"} Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.731358 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.731266 4875 generic.go:334] "Generic (PLEG): container finished" podID="455921c1-b5b6-42e8-b050-920a49161c06" containerID="65042108355da07156c563cba5f86beb047c983b4d734a6d89638aa3420b2e31" exitCode=0 Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.732509 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"455921c1-b5b6-42e8-b050-920a49161c06","Type":"ContainerDied","Data":"87c14020b44158d01aeda0522715fdc0896fedd2fb5df044a2b86382d8b702d7"} Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.754378 4875 scope.go:117] "RemoveContainer" containerID="e10daa96c1106b7ea170767a20b18aebde5401981cf718737782da991d9a294f" Jan 30 17:29:34 crc kubenswrapper[4875]: E0130 17:29:34.761227 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e10daa96c1106b7ea170767a20b18aebde5401981cf718737782da991d9a294f\": container with ID starting with e10daa96c1106b7ea170767a20b18aebde5401981cf718737782da991d9a294f not found: ID does not exist" containerID="e10daa96c1106b7ea170767a20b18aebde5401981cf718737782da991d9a294f" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.761261 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e10daa96c1106b7ea170767a20b18aebde5401981cf718737782da991d9a294f"} err="failed to get container status \"e10daa96c1106b7ea170767a20b18aebde5401981cf718737782da991d9a294f\": rpc error: code = NotFound desc = could not find container \"e10daa96c1106b7ea170767a20b18aebde5401981cf718737782da991d9a294f\": container with ID starting with e10daa96c1106b7ea170767a20b18aebde5401981cf718737782da991d9a294f not found: ID does not exist" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.761286 4875 scope.go:117] "RemoveContainer" containerID="65042108355da07156c563cba5f86beb047c983b4d734a6d89638aa3420b2e31" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.762030 4875 generic.go:334] "Generic (PLEG): container finished" podID="70107d0d-90c4-4114-8328-6a12baf5c7cd" containerID="07b8cba6e8c49c3765f2197ce04d8326c1bb62115679b813ba0ebb0bca908f86" exitCode=0 Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.762627 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell094f9-account-delete-zd7mh" event={"ID":"70107d0d-90c4-4114-8328-6a12baf5c7cd","Type":"ContainerDied","Data":"07b8cba6e8c49c3765f2197ce04d8326c1bb62115679b813ba0ebb0bca908f86"} Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.762655 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell094f9-account-delete-zd7mh" event={"ID":"70107d0d-90c4-4114-8328-6a12baf5c7cd","Type":"ContainerStarted","Data":"06ed4a15f1c17cffc12ab8fe5a87eef0af44683384466a1af997f4da40f70f4a"} Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.765381 4875 generic.go:334] "Generic (PLEG): container finished" podID="ebc78f90-5aac-40d5-aca2-af2d7921f98e" containerID="b6d156423146bb231253ca2e721349b2e472892e0e5224a367739c8da335d009" exitCode=0 Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.765419 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi7118-account-delete-zn4dq" 
event={"ID":"ebc78f90-5aac-40d5-aca2-af2d7921f98e","Type":"ContainerDied","Data":"b6d156423146bb231253ca2e721349b2e472892e0e5224a367739c8da335d009"} Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.765442 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi7118-account-delete-zn4dq" event={"ID":"ebc78f90-5aac-40d5-aca2-af2d7921f98e","Type":"ContainerStarted","Data":"1dc498bc0c8521b86ccf16814c8b0505a2eeff9a67f691b061c641d8e0345d2a"} Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.771790 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.783843 4875 scope.go:117] "RemoveContainer" containerID="65042108355da07156c563cba5f86beb047c983b4d734a6d89638aa3420b2e31" Jan 30 17:29:34 crc kubenswrapper[4875]: E0130 17:29:34.785300 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65042108355da07156c563cba5f86beb047c983b4d734a6d89638aa3420b2e31\": container with ID starting with 65042108355da07156c563cba5f86beb047c983b4d734a6d89638aa3420b2e31 not found: ID does not exist" containerID="65042108355da07156c563cba5f86beb047c983b4d734a6d89638aa3420b2e31" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.785347 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65042108355da07156c563cba5f86beb047c983b4d734a6d89638aa3420b2e31"} err="failed to get container status \"65042108355da07156c563cba5f86beb047c983b4d734a6d89638aa3420b2e31\": rpc error: code = NotFound desc = could not find container \"65042108355da07156c563cba5f86beb047c983b4d734a6d89638aa3420b2e31\": container with ID starting with 65042108355da07156c563cba5f86beb047c983b4d734a6d89638aa3420b2e31 not found: ID does not exist" Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.800411 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.834453 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 30 17:29:34 crc kubenswrapper[4875]: I0130 17:29:34.846368 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.134357 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a438b47-7d96-403c-ac75-74677da11940" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.172:8775/\": read tcp 10.217.0.2:45918->10.217.0.172:8775: read: connection reset by peer" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.137461 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a438b47-7d96-403c-ac75-74677da11940" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.172:8775/\": read tcp 10.217.0.2:45934->10.217.0.172:8775: read: connection reset by peer" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.268573 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.388397 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/017099d9-455f-4e89-b38a-1a5400faec32-config-data\") pod \"017099d9-455f-4e89-b38a-1a5400faec32\" (UID: \"017099d9-455f-4e89-b38a-1a5400faec32\") " Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.388548 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpxff\" (UniqueName: \"kubernetes.io/projected/017099d9-455f-4e89-b38a-1a5400faec32-kube-api-access-tpxff\") pod \"017099d9-455f-4e89-b38a-1a5400faec32\" (UID: \"017099d9-455f-4e89-b38a-1a5400faec32\") " Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.393839 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/017099d9-455f-4e89-b38a-1a5400faec32-kube-api-access-tpxff" (OuterVolumeSpecName: "kube-api-access-tpxff") pod "017099d9-455f-4e89-b38a-1a5400faec32" (UID: "017099d9-455f-4e89-b38a-1a5400faec32"). InnerVolumeSpecName "kube-api-access-tpxff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.410213 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/017099d9-455f-4e89-b38a-1a5400faec32-config-data" (OuterVolumeSpecName: "config-data") pod "017099d9-455f-4e89-b38a-1a5400faec32" (UID: "017099d9-455f-4e89-b38a-1a5400faec32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.490394 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpxff\" (UniqueName: \"kubernetes.io/projected/017099d9-455f-4e89-b38a-1a5400faec32-kube-api-access-tpxff\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.490765 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/017099d9-455f-4e89-b38a-1a5400faec32-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.505287 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.592392 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a438b47-7d96-403c-ac75-74677da11940-config-data\") pod \"8a438b47-7d96-403c-ac75-74677da11940\" (UID: \"8a438b47-7d96-403c-ac75-74677da11940\") " Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.592463 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a438b47-7d96-403c-ac75-74677da11940-logs\") pod \"8a438b47-7d96-403c-ac75-74677da11940\" (UID: \"8a438b47-7d96-403c-ac75-74677da11940\") " Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.592523 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shhdw\" (UniqueName: \"kubernetes.io/projected/8a438b47-7d96-403c-ac75-74677da11940-kube-api-access-shhdw\") pod \"8a438b47-7d96-403c-ac75-74677da11940\" (UID: \"8a438b47-7d96-403c-ac75-74677da11940\") " Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.593106 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a438b47-7d96-403c-ac75-74677da11940-logs" (OuterVolumeSpecName: "logs") pod "8a438b47-7d96-403c-ac75-74677da11940" (UID: "8a438b47-7d96-403c-ac75-74677da11940"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.595221 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a438b47-7d96-403c-ac75-74677da11940-kube-api-access-shhdw" (OuterVolumeSpecName: "kube-api-access-shhdw") pod "8a438b47-7d96-403c-ac75-74677da11940" (UID: "8a438b47-7d96-403c-ac75-74677da11940"). InnerVolumeSpecName "kube-api-access-shhdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.610036 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a438b47-7d96-403c-ac75-74677da11940-config-data" (OuterVolumeSpecName: "config-data") pod "8a438b47-7d96-403c-ac75-74677da11940" (UID: "8a438b47-7d96-403c-ac75-74677da11940"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.694774 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a438b47-7d96-403c-ac75-74677da11940-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.694801 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a438b47-7d96-403c-ac75-74677da11940-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.694810 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shhdw\" (UniqueName: \"kubernetes.io/projected/8a438b47-7d96-403c-ac75-74677da11940-kube-api-access-shhdw\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.774458 4875 generic.go:334] "Generic (PLEG): container finished" podID="8a438b47-7d96-403c-ac75-74677da11940" containerID="450f934f2841cc5acb8da606324e8c6ed944feb63e3df1e11ec43af20635b526" exitCode=0 Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.774513 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.774520 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a438b47-7d96-403c-ac75-74677da11940","Type":"ContainerDied","Data":"450f934f2841cc5acb8da606324e8c6ed944feb63e3df1e11ec43af20635b526"} Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.774646 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a438b47-7d96-403c-ac75-74677da11940","Type":"ContainerDied","Data":"34278ba6e30cd10bc3d6def9f4e3c500cf237576335399756aeec1d7988b8b7e"} Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.774680 4875 scope.go:117] "RemoveContainer" containerID="450f934f2841cc5acb8da606324e8c6ed944feb63e3df1e11ec43af20635b526" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.778389 4875 generic.go:334] "Generic (PLEG): container finished" podID="017099d9-455f-4e89-b38a-1a5400faec32" containerID="5ea465c5a95c127d86e40c2ae4a590c6cfe38766e828127f7eb6c7ef453b2934" exitCode=0 Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.778443 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.778432 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"017099d9-455f-4e89-b38a-1a5400faec32","Type":"ContainerDied","Data":"5ea465c5a95c127d86e40c2ae4a590c6cfe38766e828127f7eb6c7ef453b2934"} Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.778513 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"017099d9-455f-4e89-b38a-1a5400faec32","Type":"ContainerDied","Data":"d5f1ede68a7d5c4b07e7080736a1bf069f7dd291471ebbcf20234f4710b64de3"} Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.796632 4875 scope.go:117] "RemoveContainer" containerID="289218439f4dc907eddae46998a6f973ba70910c4a4afbe82102c54c26816af6" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.808238 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.817644 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.823569 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.825167 4875 scope.go:117] "RemoveContainer" containerID="450f934f2841cc5acb8da606324e8c6ed944feb63e3df1e11ec43af20635b526" Jan 30 17:29:35 crc kubenswrapper[4875]: E0130 17:29:35.825666 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"450f934f2841cc5acb8da606324e8c6ed944feb63e3df1e11ec43af20635b526\": container with ID starting with 450f934f2841cc5acb8da606324e8c6ed944feb63e3df1e11ec43af20635b526 not found: ID does not exist" containerID="450f934f2841cc5acb8da606324e8c6ed944feb63e3df1e11ec43af20635b526" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.825701 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450f934f2841cc5acb8da606324e8c6ed944feb63e3df1e11ec43af20635b526"} err="failed to get container status \"450f934f2841cc5acb8da606324e8c6ed944feb63e3df1e11ec43af20635b526\": rpc error: code = NotFound desc = could not find container \"450f934f2841cc5acb8da606324e8c6ed944feb63e3df1e11ec43af20635b526\": container with ID starting with 450f934f2841cc5acb8da606324e8c6ed944feb63e3df1e11ec43af20635b526 not found: ID does not exist" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.825726 4875 scope.go:117] "RemoveContainer" containerID="289218439f4dc907eddae46998a6f973ba70910c4a4afbe82102c54c26816af6" Jan 30 17:29:35 crc kubenswrapper[4875]: E0130 17:29:35.826539 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"289218439f4dc907eddae46998a6f973ba70910c4a4afbe82102c54c26816af6\": container with ID starting with 289218439f4dc907eddae46998a6f973ba70910c4a4afbe82102c54c26816af6 not found: ID does not exist" containerID="289218439f4dc907eddae46998a6f973ba70910c4a4afbe82102c54c26816af6" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.826561 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"289218439f4dc907eddae46998a6f973ba70910c4a4afbe82102c54c26816af6"} err="failed to get container status 
\"289218439f4dc907eddae46998a6f973ba70910c4a4afbe82102c54c26816af6\": rpc error: code = NotFound desc = could not find container \"289218439f4dc907eddae46998a6f973ba70910c4a4afbe82102c54c26816af6\": container with ID starting with 289218439f4dc907eddae46998a6f973ba70910c4a4afbe82102c54c26816af6 not found: ID does not exist" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.826575 4875 scope.go:117] "RemoveContainer" containerID="5ea465c5a95c127d86e40c2ae4a590c6cfe38766e828127f7eb6c7ef453b2934" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.830241 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.848326 4875 scope.go:117] "RemoveContainer" containerID="5ea465c5a95c127d86e40c2ae4a590c6cfe38766e828127f7eb6c7ef453b2934" Jan 30 17:29:35 crc kubenswrapper[4875]: E0130 17:29:35.848733 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ea465c5a95c127d86e40c2ae4a590c6cfe38766e828127f7eb6c7ef453b2934\": container with ID starting with 5ea465c5a95c127d86e40c2ae4a590c6cfe38766e828127f7eb6c7ef453b2934 not found: ID does not exist" containerID="5ea465c5a95c127d86e40c2ae4a590c6cfe38766e828127f7eb6c7ef453b2934" Jan 30 17:29:35 crc kubenswrapper[4875]: I0130 17:29:35.848769 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ea465c5a95c127d86e40c2ae4a590c6cfe38766e828127f7eb6c7ef453b2934"} err="failed to get container status \"5ea465c5a95c127d86e40c2ae4a590c6cfe38766e828127f7eb6c7ef453b2934\": rpc error: code = NotFound desc = could not find container \"5ea465c5a95c127d86e40c2ae4a590c6cfe38766e828127f7eb6c7ef453b2934\": container with ID starting with 5ea465c5a95c127d86e40c2ae4a590c6cfe38766e828127f7eb6c7ef453b2934 not found: ID does not exist" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.041598 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell094f9-account-delete-zd7mh" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.101679 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70107d0d-90c4-4114-8328-6a12baf5c7cd-operator-scripts\") pod \"70107d0d-90c4-4114-8328-6a12baf5c7cd\" (UID: \"70107d0d-90c4-4114-8328-6a12baf5c7cd\") " Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.102201 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70107d0d-90c4-4114-8328-6a12baf5c7cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70107d0d-90c4-4114-8328-6a12baf5c7cd" (UID: "70107d0d-90c4-4114-8328-6a12baf5c7cd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.102352 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw2cj\" (UniqueName: \"kubernetes.io/projected/70107d0d-90c4-4114-8328-6a12baf5c7cd-kube-api-access-cw2cj\") pod \"70107d0d-90c4-4114-8328-6a12baf5c7cd\" (UID: \"70107d0d-90c4-4114-8328-6a12baf5c7cd\") " Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.103258 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70107d0d-90c4-4114-8328-6a12baf5c7cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.105636 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70107d0d-90c4-4114-8328-6a12baf5c7cd-kube-api-access-cw2cj" (OuterVolumeSpecName: "kube-api-access-cw2cj") pod "70107d0d-90c4-4114-8328-6a12baf5c7cd" (UID: "70107d0d-90c4-4114-8328-6a12baf5c7cd"). InnerVolumeSpecName "kube-api-access-cw2cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.168366 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="017099d9-455f-4e89-b38a-1a5400faec32" path="/var/lib/kubelet/pods/017099d9-455f-4e89-b38a-1a5400faec32/volumes" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.169153 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08294a73-b9f7-404e-b0fa-7d5b85501c39" path="/var/lib/kubelet/pods/08294a73-b9f7-404e-b0fa-7d5b85501c39/volumes" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.171504 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="455921c1-b5b6-42e8-b050-920a49161c06" path="/var/lib/kubelet/pods/455921c1-b5b6-42e8-b050-920a49161c06/volumes" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.172464 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a438b47-7d96-403c-ac75-74677da11940" path="/var/lib/kubelet/pods/8a438b47-7d96-403c-ac75-74677da11940/volumes" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.205375 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw2cj\" (UniqueName: \"kubernetes.io/projected/70107d0d-90c4-4114-8328-6a12baf5c7cd-kube-api-access-cw2cj\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.210155 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell188b2-account-delete-k9vrw" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.214974 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapi7118-account-delete-zn4dq" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.306621 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebc78f90-5aac-40d5-aca2-af2d7921f98e-operator-scripts\") pod \"ebc78f90-5aac-40d5-aca2-af2d7921f98e\" (UID: \"ebc78f90-5aac-40d5-aca2-af2d7921f98e\") " Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.306749 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbf4b428-7d59-448f-a818-c0bc51fbd99e-operator-scripts\") pod \"bbf4b428-7d59-448f-a818-c0bc51fbd99e\" (UID: \"bbf4b428-7d59-448f-a818-c0bc51fbd99e\") " Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.306857 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jspnv\" (UniqueName: \"kubernetes.io/projected/ebc78f90-5aac-40d5-aca2-af2d7921f98e-kube-api-access-jspnv\") pod \"ebc78f90-5aac-40d5-aca2-af2d7921f98e\" (UID: \"ebc78f90-5aac-40d5-aca2-af2d7921f98e\") " Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.306893 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2522l\" (UniqueName: \"kubernetes.io/projected/bbf4b428-7d59-448f-a818-c0bc51fbd99e-kube-api-access-2522l\") pod \"bbf4b428-7d59-448f-a818-c0bc51fbd99e\" (UID: \"bbf4b428-7d59-448f-a818-c0bc51fbd99e\") " Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.307167 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc78f90-5aac-40d5-aca2-af2d7921f98e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ebc78f90-5aac-40d5-aca2-af2d7921f98e" (UID: "ebc78f90-5aac-40d5-aca2-af2d7921f98e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.307765 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbf4b428-7d59-448f-a818-c0bc51fbd99e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bbf4b428-7d59-448f-a818-c0bc51fbd99e" (UID: "bbf4b428-7d59-448f-a818-c0bc51fbd99e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.310752 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebc78f90-5aac-40d5-aca2-af2d7921f98e-kube-api-access-jspnv" (OuterVolumeSpecName: "kube-api-access-jspnv") pod "ebc78f90-5aac-40d5-aca2-af2d7921f98e" (UID: "ebc78f90-5aac-40d5-aca2-af2d7921f98e"). InnerVolumeSpecName "kube-api-access-jspnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.310808 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbf4b428-7d59-448f-a818-c0bc51fbd99e-kube-api-access-2522l" (OuterVolumeSpecName: "kube-api-access-2522l") pod "bbf4b428-7d59-448f-a818-c0bc51fbd99e" (UID: "bbf4b428-7d59-448f-a818-c0bc51fbd99e"). InnerVolumeSpecName "kube-api-access-2522l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.409225 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebc78f90-5aac-40d5-aca2-af2d7921f98e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.409261 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bbf4b428-7d59-448f-a818-c0bc51fbd99e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.409274 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jspnv\" (UniqueName: \"kubernetes.io/projected/ebc78f90-5aac-40d5-aca2-af2d7921f98e-kube-api-access-jspnv\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.409288 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2522l\" (UniqueName: \"kubernetes.io/projected/bbf4b428-7d59-448f-a818-c0bc51fbd99e-kube-api-access-2522l\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.788280 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell094f9-account-delete-zd7mh" event={"ID":"70107d0d-90c4-4114-8328-6a12baf5c7cd","Type":"ContainerDied","Data":"06ed4a15f1c17cffc12ab8fe5a87eef0af44683384466a1af997f4da40f70f4a"} Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.788646 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06ed4a15f1c17cffc12ab8fe5a87eef0af44683384466a1af997f4da40f70f4a" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.788292 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell094f9-account-delete-zd7mh" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.790009 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi7118-account-delete-zn4dq" event={"ID":"ebc78f90-5aac-40d5-aca2-af2d7921f98e","Type":"ContainerDied","Data":"1dc498bc0c8521b86ccf16814c8b0505a2eeff9a67f691b061c641d8e0345d2a"} Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.790034 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi7118-account-delete-zn4dq" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.790036 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dc498bc0c8521b86ccf16814c8b0505a2eeff9a67f691b061c641d8e0345d2a" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.792386 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell188b2-account-delete-k9vrw" event={"ID":"bbf4b428-7d59-448f-a818-c0bc51fbd99e","Type":"ContainerDied","Data":"a04231aacedb522a7db7ff23439a26245ecdbe8bddb1ddb38f8e45eee1328482"} Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.792411 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a04231aacedb522a7db7ff23439a26245ecdbe8bddb1ddb38f8e45eee1328482" Jan 30 17:29:36 crc kubenswrapper[4875]: I0130 17:29:36.792447 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell188b2-account-delete-k9vrw" Jan 30 17:29:37 crc kubenswrapper[4875]: I0130 17:29:37.920742 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-c6lw8"] Jan 30 17:29:37 crc kubenswrapper[4875]: I0130 17:29:37.929478 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-c6lw8"] Jan 30 17:29:37 crc kubenswrapper[4875]: I0130 17:29:37.948192 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t"] Jan 30 17:29:37 crc kubenswrapper[4875]: I0130 17:29:37.955701 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell094f9-account-delete-zd7mh"] Jan 30 17:29:37 crc kubenswrapper[4875]: I0130 17:29:37.961678 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell094f9-account-delete-zd7mh"] Jan 30 17:29:37 crc kubenswrapper[4875]: I0130 17:29:37.967878 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-94f9-account-create-update-7n96t"] Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.024494 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-4df7t"] Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.030006 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-4df7t"] Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.035246 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-7118-account-create-update-gqwx9"] Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.043297 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-7118-account-create-update-gqwx9"] Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.051618 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novaapi7118-account-delete-zn4dq"] Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.057033 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapi7118-account-delete-zn4dq"] Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.114183 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-r6npw"] Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.120817 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-r6npw"] Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.134908 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m"] Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.143925 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26565c7a-594f-4ca2-b6c9-ea0527c04619" path="/var/lib/kubelet/pods/26565c7a-594f-4ca2-b6c9-ea0527c04619/volumes" Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.144500 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d70a7be-3789-4619-9e33-7b2d249345bd" path="/var/lib/kubelet/pods/2d70a7be-3789-4619-9e33-7b2d249345bd/volumes" Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.145074 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f18684e-d712-4eee-ae0c-e2030de0676b" path="/var/lib/kubelet/pods/5f18684e-d712-4eee-ae0c-e2030de0676b/volumes" Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 
17:29:38.145539 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70107d0d-90c4-4114-8328-6a12baf5c7cd" path="/var/lib/kubelet/pods/70107d0d-90c4-4114-8328-6a12baf5c7cd/volumes" Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.146603 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8e63a5d-4ddb-4c97-8204-bdd6342418bd" path="/var/lib/kubelet/pods/d8e63a5d-4ddb-4c97-8204-bdd6342418bd/volumes" Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.147152 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dab1bdf5-00ae-422b-9edc-f663f448c46b" path="/var/lib/kubelet/pods/dab1bdf5-00ae-422b-9edc-f663f448c46b/volumes" Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.147727 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebc78f90-5aac-40d5-aca2-af2d7921f98e" path="/var/lib/kubelet/pods/ebc78f90-5aac-40d5-aca2-af2d7921f98e/volumes" Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.148607 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell188b2-account-delete-k9vrw"] Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.150553 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-88b2-account-create-update-m4t6m"] Jan 30 17:29:38 crc kubenswrapper[4875]: I0130 17:29:38.156750 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell188b2-account-delete-k9vrw"] Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.080120 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-nqjhm"] Jan 30 17:29:40 crc kubenswrapper[4875]: E0130 17:29:40.080689 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbf4b428-7d59-448f-a818-c0bc51fbd99e" containerName="mariadb-account-delete" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.080700 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbf4b428-7d59-448f-a818-c0bc51fbd99e" containerName="mariadb-account-delete" Jan 30 17:29:40 crc kubenswrapper[4875]: E0130 17:29:40.080712 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc78f90-5aac-40d5-aca2-af2d7921f98e" containerName="mariadb-account-delete" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.080720 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc78f90-5aac-40d5-aca2-af2d7921f98e" containerName="mariadb-account-delete" Jan 30 17:29:40 crc kubenswrapper[4875]: E0130 17:29:40.080736 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a438b47-7d96-403c-ac75-74677da11940" containerName="nova-kuttl-metadata-metadata" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.080742 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a438b47-7d96-403c-ac75-74677da11940" containerName="nova-kuttl-metadata-metadata" Jan 30 17:29:40 crc kubenswrapper[4875]: E0130 17:29:40.080754 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a438b47-7d96-403c-ac75-74677da11940" containerName="nova-kuttl-metadata-log" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.080760 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a438b47-7d96-403c-ac75-74677da11940" containerName="nova-kuttl-metadata-log" Jan 30 17:29:40 crc kubenswrapper[4875]: E0130 17:29:40.080770 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="455921c1-b5b6-42e8-b050-920a49161c06" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 
30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.080775 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="455921c1-b5b6-42e8-b050-920a49161c06" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 30 17:29:40 crc kubenswrapper[4875]: E0130 17:29:40.080785 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08294a73-b9f7-404e-b0fa-7d5b85501c39" containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.080790 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="08294a73-b9f7-404e-b0fa-7d5b85501c39" containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:29:40 crc kubenswrapper[4875]: E0130 17:29:40.080804 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70107d0d-90c4-4114-8328-6a12baf5c7cd" containerName="mariadb-account-delete" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.080810 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="70107d0d-90c4-4114-8328-6a12baf5c7cd" containerName="mariadb-account-delete" Jan 30 17:29:40 crc kubenswrapper[4875]: E0130 17:29:40.080819 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="017099d9-455f-4e89-b38a-1a5400faec32" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.080825 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="017099d9-455f-4e89-b38a-1a5400faec32" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.080946 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbf4b428-7d59-448f-a818-c0bc51fbd99e" containerName="mariadb-account-delete" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.080956 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a438b47-7d96-403c-ac75-74677da11940" containerName="nova-kuttl-metadata-metadata" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.080964 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="08294a73-b9f7-404e-b0fa-7d5b85501c39" containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.080972 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a438b47-7d96-403c-ac75-74677da11940" containerName="nova-kuttl-metadata-log" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.080980 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="70107d0d-90c4-4114-8328-6a12baf5c7cd" containerName="mariadb-account-delete" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.080988 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="455921c1-b5b6-42e8-b050-920a49161c06" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.080998 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebc78f90-5aac-40d5-aca2-af2d7921f98e" containerName="mariadb-account-delete" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.081007 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="017099d9-455f-4e89-b38a-1a5400faec32" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.081485 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-nqjhm" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.089868 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-nqjhm"] Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.144369 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="387ad041-1225-4993-a8bc-7d63648e123a" path="/var/lib/kubelet/pods/387ad041-1225-4993-a8bc-7d63648e123a/volumes" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.145112 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbf4b428-7d59-448f-a818-c0bc51fbd99e" path="/var/lib/kubelet/pods/bbf4b428-7d59-448f-a818-c0bc51fbd99e/volumes" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.161655 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ffccfaf-adf6-49e9-a626-b81376554127-operator-scripts\") pod \"nova-api-db-create-nqjhm\" (UID: \"0ffccfaf-adf6-49e9-a626-b81376554127\") " pod="nova-kuttl-default/nova-api-db-create-nqjhm" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.161724 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kbrg\" (UniqueName: \"kubernetes.io/projected/0ffccfaf-adf6-49e9-a626-b81376554127-kube-api-access-5kbrg\") pod \"nova-api-db-create-nqjhm\" (UID: \"0ffccfaf-adf6-49e9-a626-b81376554127\") " pod="nova-kuttl-default/nova-api-db-create-nqjhm" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.194226 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-8dsds"] Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.195203 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-8dsds" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.205215 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-8dsds"] Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.263480 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljgd9\" (UniqueName: \"kubernetes.io/projected/e5eadec5-b07e-4825-ad38-c41990e4ad98-kube-api-access-ljgd9\") pod \"nova-cell0-db-create-8dsds\" (UID: \"e5eadec5-b07e-4825-ad38-c41990e4ad98\") " pod="nova-kuttl-default/nova-cell0-db-create-8dsds" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.263577 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5eadec5-b07e-4825-ad38-c41990e4ad98-operator-scripts\") pod \"nova-cell0-db-create-8dsds\" (UID: \"e5eadec5-b07e-4825-ad38-c41990e4ad98\") " pod="nova-kuttl-default/nova-cell0-db-create-8dsds" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.263740 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ffccfaf-adf6-49e9-a626-b81376554127-operator-scripts\") pod \"nova-api-db-create-nqjhm\" (UID: \"0ffccfaf-adf6-49e9-a626-b81376554127\") " pod="nova-kuttl-default/nova-api-db-create-nqjhm" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.263805 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kbrg\" (UniqueName: \"kubernetes.io/projected/0ffccfaf-adf6-49e9-a626-b81376554127-kube-api-access-5kbrg\") pod \"nova-api-db-create-nqjhm\" (UID: \"0ffccfaf-adf6-49e9-a626-b81376554127\") " pod="nova-kuttl-default/nova-api-db-create-nqjhm" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.264448 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ffccfaf-adf6-49e9-a626-b81376554127-operator-scripts\") pod \"nova-api-db-create-nqjhm\" (UID: \"0ffccfaf-adf6-49e9-a626-b81376554127\") " pod="nova-kuttl-default/nova-api-db-create-nqjhm" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.289336 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc"] Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.291106 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.294149 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.297232 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-qzfg8"] Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.298615 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-qzfg8" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.307203 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kbrg\" (UniqueName: \"kubernetes.io/projected/0ffccfaf-adf6-49e9-a626-b81376554127-kube-api-access-5kbrg\") pod \"nova-api-db-create-nqjhm\" (UID: \"0ffccfaf-adf6-49e9-a626-b81376554127\") " pod="nova-kuttl-default/nova-api-db-create-nqjhm" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.309947 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-qzfg8"] Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.318451 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc"] Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.364983 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06535175-24df-4a19-8892-9936345a6338-operator-scripts\") pod \"nova-cell1-db-create-qzfg8\" (UID: \"06535175-24df-4a19-8892-9936345a6338\") " pod="nova-kuttl-default/nova-cell1-db-create-qzfg8" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.365052 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljgd9\" (UniqueName: \"kubernetes.io/projected/e5eadec5-b07e-4825-ad38-c41990e4ad98-kube-api-access-ljgd9\") pod \"nova-cell0-db-create-8dsds\" (UID: \"e5eadec5-b07e-4825-ad38-c41990e4ad98\") " pod="nova-kuttl-default/nova-cell0-db-create-8dsds" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.365088 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwzjn\" (UniqueName: \"kubernetes.io/projected/06535175-24df-4a19-8892-9936345a6338-kube-api-access-bwzjn\") pod \"nova-cell1-db-create-qzfg8\" (UID: \"06535175-24df-4a19-8892-9936345a6338\") " pod="nova-kuttl-default/nova-cell1-db-create-qzfg8" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.365133 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5eadec5-b07e-4825-ad38-c41990e4ad98-operator-scripts\") pod \"nova-cell0-db-create-8dsds\" (UID: \"e5eadec5-b07e-4825-ad38-c41990e4ad98\") " pod="nova-kuttl-default/nova-cell0-db-create-8dsds" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.365166 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb44k\" (UniqueName: \"kubernetes.io/projected/95fc551d-b330-4816-9166-fa1e6f145e90-kube-api-access-cb44k\") pod \"nova-api-c51e-account-create-update-gcjcc\" (UID: \"95fc551d-b330-4816-9166-fa1e6f145e90\") " pod="nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.365372 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fc551d-b330-4816-9166-fa1e6f145e90-operator-scripts\") pod \"nova-api-c51e-account-create-update-gcjcc\" (UID: \"95fc551d-b330-4816-9166-fa1e6f145e90\") " pod="nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.365960 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e5eadec5-b07e-4825-ad38-c41990e4ad98-operator-scripts\") pod \"nova-cell0-db-create-8dsds\" (UID: \"e5eadec5-b07e-4825-ad38-c41990e4ad98\") " pod="nova-kuttl-default/nova-cell0-db-create-8dsds" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.392227 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljgd9\" (UniqueName: \"kubernetes.io/projected/e5eadec5-b07e-4825-ad38-c41990e4ad98-kube-api-access-ljgd9\") pod \"nova-cell0-db-create-8dsds\" (UID: \"e5eadec5-b07e-4825-ad38-c41990e4ad98\") " pod="nova-kuttl-default/nova-cell0-db-create-8dsds" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.397262 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-nqjhm" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.466916 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06535175-24df-4a19-8892-9936345a6338-operator-scripts\") pod \"nova-cell1-db-create-qzfg8\" (UID: \"06535175-24df-4a19-8892-9936345a6338\") " pod="nova-kuttl-default/nova-cell1-db-create-qzfg8" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.467223 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwzjn\" (UniqueName: \"kubernetes.io/projected/06535175-24df-4a19-8892-9936345a6338-kube-api-access-bwzjn\") pod \"nova-cell1-db-create-qzfg8\" (UID: \"06535175-24df-4a19-8892-9936345a6338\") " pod="nova-kuttl-default/nova-cell1-db-create-qzfg8" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.467261 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb44k\" (UniqueName: \"kubernetes.io/projected/95fc551d-b330-4816-9166-fa1e6f145e90-kube-api-access-cb44k\") pod \"nova-api-c51e-account-create-update-gcjcc\" (UID: \"95fc551d-b330-4816-9166-fa1e6f145e90\") " pod="nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.467305 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fc551d-b330-4816-9166-fa1e6f145e90-operator-scripts\") pod \"nova-api-c51e-account-create-update-gcjcc\" (UID: \"95fc551d-b330-4816-9166-fa1e6f145e90\") " pod="nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.467757 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06535175-24df-4a19-8892-9936345a6338-operator-scripts\") pod \"nova-cell1-db-create-qzfg8\" (UID: \"06535175-24df-4a19-8892-9936345a6338\") " pod="nova-kuttl-default/nova-cell1-db-create-qzfg8" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.467962 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fc551d-b330-4816-9166-fa1e6f145e90-operator-scripts\") pod \"nova-api-c51e-account-create-update-gcjcc\" (UID: \"95fc551d-b330-4816-9166-fa1e6f145e90\") " pod="nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.494185 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn"] Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.495383 4875 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.499462 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb44k\" (UniqueName: \"kubernetes.io/projected/95fc551d-b330-4816-9166-fa1e6f145e90-kube-api-access-cb44k\") pod \"nova-api-c51e-account-create-update-gcjcc\" (UID: \"95fc551d-b330-4816-9166-fa1e6f145e90\") " pod="nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.499813 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.503086 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwzjn\" (UniqueName: \"kubernetes.io/projected/06535175-24df-4a19-8892-9936345a6338-kube-api-access-bwzjn\") pod \"nova-cell1-db-create-qzfg8\" (UID: \"06535175-24df-4a19-8892-9936345a6338\") " pod="nova-kuttl-default/nova-cell1-db-create-qzfg8" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.507459 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn"] Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.520988 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-8dsds" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.568987 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8wvs\" (UniqueName: \"kubernetes.io/projected/bb1a954f-6cce-4ab8-b878-de0c48e9a80d-kube-api-access-x8wvs\") pod \"nova-cell0-b6ba-account-create-update-5tfhn\" (UID: \"bb1a954f-6cce-4ab8-b878-de0c48e9a80d\") " pod="nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.570356 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb1a954f-6cce-4ab8-b878-de0c48e9a80d-operator-scripts\") pod \"nova-cell0-b6ba-account-create-update-5tfhn\" (UID: \"bb1a954f-6cce-4ab8-b878-de0c48e9a80d\") " pod="nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.655016 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.662491 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-qzfg8" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.673285 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8wvs\" (UniqueName: \"kubernetes.io/projected/bb1a954f-6cce-4ab8-b878-de0c48e9a80d-kube-api-access-x8wvs\") pod \"nova-cell0-b6ba-account-create-update-5tfhn\" (UID: \"bb1a954f-6cce-4ab8-b878-de0c48e9a80d\") " pod="nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.673702 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb1a954f-6cce-4ab8-b878-de0c48e9a80d-operator-scripts\") pod \"nova-cell0-b6ba-account-create-update-5tfhn\" (UID: \"bb1a954f-6cce-4ab8-b878-de0c48e9a80d\") " pod="nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.674705 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb1a954f-6cce-4ab8-b878-de0c48e9a80d-operator-scripts\") pod \"nova-cell0-b6ba-account-create-update-5tfhn\" (UID: \"bb1a954f-6cce-4ab8-b878-de0c48e9a80d\") " pod="nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.692971 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx"] Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.694187 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.697059 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.699735 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8wvs\" (UniqueName: \"kubernetes.io/projected/bb1a954f-6cce-4ab8-b878-de0c48e9a80d-kube-api-access-x8wvs\") pod \"nova-cell0-b6ba-account-create-update-5tfhn\" (UID: \"bb1a954f-6cce-4ab8-b878-de0c48e9a80d\") " pod="nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.703394 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx"] Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.776974 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1e3597d-60b2-4556-9cf0-994b868f6fa2-operator-scripts\") pod \"nova-cell1-717a-account-create-update-xn9fx\" (UID: \"b1e3597d-60b2-4556-9cf0-994b868f6fa2\") " pod="nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.777281 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4wqv\" (UniqueName: \"kubernetes.io/projected/b1e3597d-60b2-4556-9cf0-994b868f6fa2-kube-api-access-m4wqv\") pod \"nova-cell1-717a-account-create-update-xn9fx\" (UID: \"b1e3597d-60b2-4556-9cf0-994b868f6fa2\") " pod="nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.825756 4875 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-8dsds"] Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.848919 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.882524 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1e3597d-60b2-4556-9cf0-994b868f6fa2-operator-scripts\") pod \"nova-cell1-717a-account-create-update-xn9fx\" (UID: \"b1e3597d-60b2-4556-9cf0-994b868f6fa2\") " pod="nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.882610 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4wqv\" (UniqueName: \"kubernetes.io/projected/b1e3597d-60b2-4556-9cf0-994b868f6fa2-kube-api-access-m4wqv\") pod \"nova-cell1-717a-account-create-update-xn9fx\" (UID: \"b1e3597d-60b2-4556-9cf0-994b868f6fa2\") " pod="nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.884447 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1e3597d-60b2-4556-9cf0-994b868f6fa2-operator-scripts\") pod \"nova-cell1-717a-account-create-update-xn9fx\" (UID: \"b1e3597d-60b2-4556-9cf0-994b868f6fa2\") " pod="nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx" Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.908056 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-nqjhm"] Jan 30 17:29:40 crc kubenswrapper[4875]: I0130 17:29:40.908295 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4wqv\" (UniqueName: \"kubernetes.io/projected/b1e3597d-60b2-4556-9cf0-994b868f6fa2-kube-api-access-m4wqv\") pod \"nova-cell1-717a-account-create-update-xn9fx\" (UID: \"b1e3597d-60b2-4556-9cf0-994b868f6fa2\") " pod="nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx" Jan 30 17:29:40 crc kubenswrapper[4875]: W0130 17:29:40.916661 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ffccfaf_adf6_49e9_a626_b81376554127.slice/crio-52d692089ac2665777ab46ff209df5ca303b9949c6b2f82c8342d9f40ebdce6d WatchSource:0}: Error finding container 52d692089ac2665777ab46ff209df5ca303b9949c6b2f82c8342d9f40ebdce6d: Status 404 returned error can't find the container with id 52d692089ac2665777ab46ff209df5ca303b9949c6b2f82c8342d9f40ebdce6d Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.016271 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx" Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.162572 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc"] Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.268269 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-qzfg8"] Jan 30 17:29:41 crc kubenswrapper[4875]: W0130 17:29:41.357931 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06535175_24df_4a19_8892_9936345a6338.slice/crio-098f77abc84c4554bc8ecd710cfb572f29caf7bcc1dc7bcbf6d9c840f19f49bb WatchSource:0}: Error finding container 098f77abc84c4554bc8ecd710cfb572f29caf7bcc1dc7bcbf6d9c840f19f49bb: Status 404 returned error can't find the container with id 098f77abc84c4554bc8ecd710cfb572f29caf7bcc1dc7bcbf6d9c840f19f49bb Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.428466 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn"] Jan 30 17:29:41 crc kubenswrapper[4875]: W0130 17:29:41.436157 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb1a954f_6cce_4ab8_b878_de0c48e9a80d.slice/crio-61477d83a515018fe77e5305fc451c0230ea6f724cb828104bb7766c4b1b4592 WatchSource:0}: Error finding container 61477d83a515018fe77e5305fc451c0230ea6f724cb828104bb7766c4b1b4592: Status 404 returned error can't find the container with id 61477d83a515018fe77e5305fc451c0230ea6f724cb828104bb7766c4b1b4592 Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.510060 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx"] Jan 30 17:29:41 crc kubenswrapper[4875]: W0130 17:29:41.521979 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1e3597d_60b2_4556_9cf0_994b868f6fa2.slice/crio-aea36b208d494fe815c4ac540c13c5cb036e512a9a9050d811322c3dd6fb4f28 WatchSource:0}: Error finding container aea36b208d494fe815c4ac540c13c5cb036e512a9a9050d811322c3dd6fb4f28: Status 404 returned error can't find the container with id aea36b208d494fe815c4ac540c13c5cb036e512a9a9050d811322c3dd6fb4f28 Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.837413 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc" event={"ID":"95fc551d-b330-4816-9166-fa1e6f145e90","Type":"ContainerStarted","Data":"390a3149f136c0c2a10de2c4276fe05eb29f07278aa8bce6e169e7d2e9928733"} Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.837455 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc" event={"ID":"95fc551d-b330-4816-9166-fa1e6f145e90","Type":"ContainerStarted","Data":"49ec4f4285f85775d8fa3680a7df978410c8382d2f23e8554958d8fa84bd9b64"} Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.839749 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx" event={"ID":"b1e3597d-60b2-4556-9cf0-994b868f6fa2","Type":"ContainerStarted","Data":"c084402e24b3ca5c167a0a8e077d2b1f367e48ebe16a30acf8dfd1ea7597d479"} Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.839775 4875 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx" event={"ID":"b1e3597d-60b2-4556-9cf0-994b868f6fa2","Type":"ContainerStarted","Data":"aea36b208d494fe815c4ac540c13c5cb036e512a9a9050d811322c3dd6fb4f28"} Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.842595 4875 generic.go:334] "Generic (PLEG): container finished" podID="06535175-24df-4a19-8892-9936345a6338" containerID="8a618579c1bc5c181ddc634f841afbceb7da691f7052ed3508f09c51a7ac8c14" exitCode=0 Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.842644 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-qzfg8" event={"ID":"06535175-24df-4a19-8892-9936345a6338","Type":"ContainerDied","Data":"8a618579c1bc5c181ddc634f841afbceb7da691f7052ed3508f09c51a7ac8c14"} Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.842703 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-qzfg8" event={"ID":"06535175-24df-4a19-8892-9936345a6338","Type":"ContainerStarted","Data":"098f77abc84c4554bc8ecd710cfb572f29caf7bcc1dc7bcbf6d9c840f19f49bb"} Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.843974 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn" event={"ID":"bb1a954f-6cce-4ab8-b878-de0c48e9a80d","Type":"ContainerStarted","Data":"b2f06f7d9a5c74971f735905abe0a8db492f48583eafd4afba815679681db8eb"} Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.844018 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn" event={"ID":"bb1a954f-6cce-4ab8-b878-de0c48e9a80d","Type":"ContainerStarted","Data":"61477d83a515018fe77e5305fc451c0230ea6f724cb828104bb7766c4b1b4592"} Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.846103 4875 generic.go:334] "Generic (PLEG): container finished" podID="0ffccfaf-adf6-49e9-a626-b81376554127" containerID="1c83ae29f08450fe361b967cc3c6634c1275f8b1383d44fc8ebad6147a18b38f" exitCode=0 Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.846147 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-nqjhm" event={"ID":"0ffccfaf-adf6-49e9-a626-b81376554127","Type":"ContainerDied","Data":"1c83ae29f08450fe361b967cc3c6634c1275f8b1383d44fc8ebad6147a18b38f"} Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.846163 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-nqjhm" event={"ID":"0ffccfaf-adf6-49e9-a626-b81376554127","Type":"ContainerStarted","Data":"52d692089ac2665777ab46ff209df5ca303b9949c6b2f82c8342d9f40ebdce6d"} Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.847969 4875 generic.go:334] "Generic (PLEG): container finished" podID="e5eadec5-b07e-4825-ad38-c41990e4ad98" containerID="997a9a921c3442ae23e68a567b4c8b7589fd7a91c44f310e97c0cdfa685665ca" exitCode=0 Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.848002 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-8dsds" event={"ID":"e5eadec5-b07e-4825-ad38-c41990e4ad98","Type":"ContainerDied","Data":"997a9a921c3442ae23e68a567b4c8b7589fd7a91c44f310e97c0cdfa685665ca"} Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.848022 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-8dsds" 
event={"ID":"e5eadec5-b07e-4825-ad38-c41990e4ad98","Type":"ContainerStarted","Data":"5b3cabb87cacd023e5ecfe401f11b6d919d689515762b9b2540f90e160cb5078"} Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.885568 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx" podStartSLOduration=1.885543027 podStartE2EDuration="1.885543027s" podCreationTimestamp="2026-01-30 17:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:29:41.877963926 +0000 UTC m=+1992.425327309" watchObservedRunningTime="2026-01-30 17:29:41.885543027 +0000 UTC m=+1992.432906420" Jan 30 17:29:41 crc kubenswrapper[4875]: I0130 17:29:41.921569 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn" podStartSLOduration=1.921552286 podStartE2EDuration="1.921552286s" podCreationTimestamp="2026-01-30 17:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:29:41.914995397 +0000 UTC m=+1992.462358780" watchObservedRunningTime="2026-01-30 17:29:41.921552286 +0000 UTC m=+1992.468915669" Jan 30 17:29:42 crc kubenswrapper[4875]: I0130 17:29:42.859256 4875 generic.go:334] "Generic (PLEG): container finished" podID="95fc551d-b330-4816-9166-fa1e6f145e90" containerID="390a3149f136c0c2a10de2c4276fe05eb29f07278aa8bce6e169e7d2e9928733" exitCode=0 Jan 30 17:29:42 crc kubenswrapper[4875]: I0130 17:29:42.859360 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc" event={"ID":"95fc551d-b330-4816-9166-fa1e6f145e90","Type":"ContainerDied","Data":"390a3149f136c0c2a10de2c4276fe05eb29f07278aa8bce6e169e7d2e9928733"} Jan 30 17:29:42 crc kubenswrapper[4875]: I0130 17:29:42.861666 4875 generic.go:334] "Generic (PLEG): container finished" podID="b1e3597d-60b2-4556-9cf0-994b868f6fa2" containerID="c084402e24b3ca5c167a0a8e077d2b1f367e48ebe16a30acf8dfd1ea7597d479" exitCode=0 Jan 30 17:29:42 crc kubenswrapper[4875]: I0130 17:29:42.861794 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx" event={"ID":"b1e3597d-60b2-4556-9cf0-994b868f6fa2","Type":"ContainerDied","Data":"c084402e24b3ca5c167a0a8e077d2b1f367e48ebe16a30acf8dfd1ea7597d479"} Jan 30 17:29:42 crc kubenswrapper[4875]: I0130 17:29:42.869338 4875 generic.go:334] "Generic (PLEG): container finished" podID="bb1a954f-6cce-4ab8-b878-de0c48e9a80d" containerID="b2f06f7d9a5c74971f735905abe0a8db492f48583eafd4afba815679681db8eb" exitCode=0 Jan 30 17:29:42 crc kubenswrapper[4875]: I0130 17:29:42.869595 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn" event={"ID":"bb1a954f-6cce-4ab8-b878-de0c48e9a80d","Type":"ContainerDied","Data":"b2f06f7d9a5c74971f735905abe0a8db492f48583eafd4afba815679681db8eb"} Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.356171 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.435566 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb44k\" (UniqueName: \"kubernetes.io/projected/95fc551d-b330-4816-9166-fa1e6f145e90-kube-api-access-cb44k\") pod \"95fc551d-b330-4816-9166-fa1e6f145e90\" (UID: \"95fc551d-b330-4816-9166-fa1e6f145e90\") " Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.435639 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fc551d-b330-4816-9166-fa1e6f145e90-operator-scripts\") pod \"95fc551d-b330-4816-9166-fa1e6f145e90\" (UID: \"95fc551d-b330-4816-9166-fa1e6f145e90\") " Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.436548 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fc551d-b330-4816-9166-fa1e6f145e90-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95fc551d-b330-4816-9166-fa1e6f145e90" (UID: "95fc551d-b330-4816-9166-fa1e6f145e90"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.449810 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95fc551d-b330-4816-9166-fa1e6f145e90-kube-api-access-cb44k" (OuterVolumeSpecName: "kube-api-access-cb44k") pod "95fc551d-b330-4816-9166-fa1e6f145e90" (UID: "95fc551d-b330-4816-9166-fa1e6f145e90"). InnerVolumeSpecName "kube-api-access-cb44k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.534845 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-nqjhm" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.537841 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb44k\" (UniqueName: \"kubernetes.io/projected/95fc551d-b330-4816-9166-fa1e6f145e90-kube-api-access-cb44k\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.537872 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95fc551d-b330-4816-9166-fa1e6f145e90-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.541919 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-8dsds" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.551770 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-qzfg8" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.638980 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ffccfaf-adf6-49e9-a626-b81376554127-operator-scripts\") pod \"0ffccfaf-adf6-49e9-a626-b81376554127\" (UID: \"0ffccfaf-adf6-49e9-a626-b81376554127\") " Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.639074 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06535175-24df-4a19-8892-9936345a6338-operator-scripts\") pod \"06535175-24df-4a19-8892-9936345a6338\" (UID: \"06535175-24df-4a19-8892-9936345a6338\") " Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.639116 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwzjn\" (UniqueName: \"kubernetes.io/projected/06535175-24df-4a19-8892-9936345a6338-kube-api-access-bwzjn\") pod \"06535175-24df-4a19-8892-9936345a6338\" (UID: \"06535175-24df-4a19-8892-9936345a6338\") " Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.639200 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kbrg\" (UniqueName: \"kubernetes.io/projected/0ffccfaf-adf6-49e9-a626-b81376554127-kube-api-access-5kbrg\") pod \"0ffccfaf-adf6-49e9-a626-b81376554127\" (UID: \"0ffccfaf-adf6-49e9-a626-b81376554127\") " Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.639227 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5eadec5-b07e-4825-ad38-c41990e4ad98-operator-scripts\") pod \"e5eadec5-b07e-4825-ad38-c41990e4ad98\" (UID: \"e5eadec5-b07e-4825-ad38-c41990e4ad98\") " Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.639266 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljgd9\" (UniqueName: \"kubernetes.io/projected/e5eadec5-b07e-4825-ad38-c41990e4ad98-kube-api-access-ljgd9\") pod \"e5eadec5-b07e-4825-ad38-c41990e4ad98\" (UID: \"e5eadec5-b07e-4825-ad38-c41990e4ad98\") " Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.640310 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06535175-24df-4a19-8892-9936345a6338-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06535175-24df-4a19-8892-9936345a6338" (UID: "06535175-24df-4a19-8892-9936345a6338"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.640469 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ffccfaf-adf6-49e9-a626-b81376554127-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ffccfaf-adf6-49e9-a626-b81376554127" (UID: "0ffccfaf-adf6-49e9-a626-b81376554127"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.640694 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5eadec5-b07e-4825-ad38-c41990e4ad98-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5eadec5-b07e-4825-ad38-c41990e4ad98" (UID: "e5eadec5-b07e-4825-ad38-c41990e4ad98"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.642310 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5eadec5-b07e-4825-ad38-c41990e4ad98-kube-api-access-ljgd9" (OuterVolumeSpecName: "kube-api-access-ljgd9") pod "e5eadec5-b07e-4825-ad38-c41990e4ad98" (UID: "e5eadec5-b07e-4825-ad38-c41990e4ad98"). InnerVolumeSpecName "kube-api-access-ljgd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.657343 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06535175-24df-4a19-8892-9936345a6338-kube-api-access-bwzjn" (OuterVolumeSpecName: "kube-api-access-bwzjn") pod "06535175-24df-4a19-8892-9936345a6338" (UID: "06535175-24df-4a19-8892-9936345a6338"). InnerVolumeSpecName "kube-api-access-bwzjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.657405 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ffccfaf-adf6-49e9-a626-b81376554127-kube-api-access-5kbrg" (OuterVolumeSpecName: "kube-api-access-5kbrg") pod "0ffccfaf-adf6-49e9-a626-b81376554127" (UID: "0ffccfaf-adf6-49e9-a626-b81376554127"). InnerVolumeSpecName "kube-api-access-5kbrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.741011 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwzjn\" (UniqueName: \"kubernetes.io/projected/06535175-24df-4a19-8892-9936345a6338-kube-api-access-bwzjn\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.741050 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kbrg\" (UniqueName: \"kubernetes.io/projected/0ffccfaf-adf6-49e9-a626-b81376554127-kube-api-access-5kbrg\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.741061 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5eadec5-b07e-4825-ad38-c41990e4ad98-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.741070 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljgd9\" (UniqueName: \"kubernetes.io/projected/e5eadec5-b07e-4825-ad38-c41990e4ad98-kube-api-access-ljgd9\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.741078 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ffccfaf-adf6-49e9-a626-b81376554127-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.741088 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06535175-24df-4a19-8892-9936345a6338-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.920217 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-qzfg8" event={"ID":"06535175-24df-4a19-8892-9936345a6338","Type":"ContainerDied","Data":"098f77abc84c4554bc8ecd710cfb572f29caf7bcc1dc7bcbf6d9c840f19f49bb"} Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.920270 4875 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="098f77abc84c4554bc8ecd710cfb572f29caf7bcc1dc7bcbf6d9c840f19f49bb" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.920330 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-qzfg8" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.924193 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-nqjhm" event={"ID":"0ffccfaf-adf6-49e9-a626-b81376554127","Type":"ContainerDied","Data":"52d692089ac2665777ab46ff209df5ca303b9949c6b2f82c8342d9f40ebdce6d"} Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.924233 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52d692089ac2665777ab46ff209df5ca303b9949c6b2f82c8342d9f40ebdce6d" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.924297 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-nqjhm" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.933727 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-8dsds" event={"ID":"e5eadec5-b07e-4825-ad38-c41990e4ad98","Type":"ContainerDied","Data":"5b3cabb87cacd023e5ecfe401f11b6d919d689515762b9b2540f90e160cb5078"} Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.933987 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b3cabb87cacd023e5ecfe401f11b6d919d689515762b9b2540f90e160cb5078" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.933744 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-8dsds" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.938753 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc" event={"ID":"95fc551d-b330-4816-9166-fa1e6f145e90","Type":"ContainerDied","Data":"49ec4f4285f85775d8fa3680a7df978410c8382d2f23e8554958d8fa84bd9b64"} Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.938808 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49ec4f4285f85775d8fa3680a7df978410c8382d2f23e8554958d8fa84bd9b64" Jan 30 17:29:43 crc kubenswrapper[4875]: I0130 17:29:43.938877 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc" Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.335493 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn" Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.341861 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx" Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.460152 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4wqv\" (UniqueName: \"kubernetes.io/projected/b1e3597d-60b2-4556-9cf0-994b868f6fa2-kube-api-access-m4wqv\") pod \"b1e3597d-60b2-4556-9cf0-994b868f6fa2\" (UID: \"b1e3597d-60b2-4556-9cf0-994b868f6fa2\") " Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.460283 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb1a954f-6cce-4ab8-b878-de0c48e9a80d-operator-scripts\") pod \"bb1a954f-6cce-4ab8-b878-de0c48e9a80d\" (UID: \"bb1a954f-6cce-4ab8-b878-de0c48e9a80d\") " Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.460318 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8wvs\" (UniqueName: \"kubernetes.io/projected/bb1a954f-6cce-4ab8-b878-de0c48e9a80d-kube-api-access-x8wvs\") pod \"bb1a954f-6cce-4ab8-b878-de0c48e9a80d\" (UID: \"bb1a954f-6cce-4ab8-b878-de0c48e9a80d\") " Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.460424 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1e3597d-60b2-4556-9cf0-994b868f6fa2-operator-scripts\") pod \"b1e3597d-60b2-4556-9cf0-994b868f6fa2\" (UID: \"b1e3597d-60b2-4556-9cf0-994b868f6fa2\") " Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.460994 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1e3597d-60b2-4556-9cf0-994b868f6fa2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1e3597d-60b2-4556-9cf0-994b868f6fa2" (UID: "b1e3597d-60b2-4556-9cf0-994b868f6fa2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.461269 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb1a954f-6cce-4ab8-b878-de0c48e9a80d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb1a954f-6cce-4ab8-b878-de0c48e9a80d" (UID: "bb1a954f-6cce-4ab8-b878-de0c48e9a80d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.463755 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1e3597d-60b2-4556-9cf0-994b868f6fa2-kube-api-access-m4wqv" (OuterVolumeSpecName: "kube-api-access-m4wqv") pod "b1e3597d-60b2-4556-9cf0-994b868f6fa2" (UID: "b1e3597d-60b2-4556-9cf0-994b868f6fa2"). InnerVolumeSpecName "kube-api-access-m4wqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.463773 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb1a954f-6cce-4ab8-b878-de0c48e9a80d-kube-api-access-x8wvs" (OuterVolumeSpecName: "kube-api-access-x8wvs") pod "bb1a954f-6cce-4ab8-b878-de0c48e9a80d" (UID: "bb1a954f-6cce-4ab8-b878-de0c48e9a80d"). InnerVolumeSpecName "kube-api-access-x8wvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.566691 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4wqv\" (UniqueName: \"kubernetes.io/projected/b1e3597d-60b2-4556-9cf0-994b868f6fa2-kube-api-access-m4wqv\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.566737 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb1a954f-6cce-4ab8-b878-de0c48e9a80d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.566752 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8wvs\" (UniqueName: \"kubernetes.io/projected/bb1a954f-6cce-4ab8-b878-de0c48e9a80d-kube-api-access-x8wvs\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.566764 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1e3597d-60b2-4556-9cf0-994b868f6fa2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.948024 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx" event={"ID":"b1e3597d-60b2-4556-9cf0-994b868f6fa2","Type":"ContainerDied","Data":"aea36b208d494fe815c4ac540c13c5cb036e512a9a9050d811322c3dd6fb4f28"} Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.948328 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aea36b208d494fe815c4ac540c13c5cb036e512a9a9050d811322c3dd6fb4f28" Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.948379 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx" Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.951668 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn" event={"ID":"bb1a954f-6cce-4ab8-b878-de0c48e9a80d","Type":"ContainerDied","Data":"61477d83a515018fe77e5305fc451c0230ea6f724cb828104bb7766c4b1b4592"} Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.951708 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61477d83a515018fe77e5305fc451c0230ea6f724cb828104bb7766c4b1b4592" Jan 30 17:29:44 crc kubenswrapper[4875]: I0130 17:29:44.951731 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.729096 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc"] Jan 30 17:29:45 crc kubenswrapper[4875]: E0130 17:29:45.729383 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ffccfaf-adf6-49e9-a626-b81376554127" containerName="mariadb-database-create" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.729394 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ffccfaf-adf6-49e9-a626-b81376554127" containerName="mariadb-database-create" Jan 30 17:29:45 crc kubenswrapper[4875]: E0130 17:29:45.729405 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5eadec5-b07e-4825-ad38-c41990e4ad98" containerName="mariadb-database-create" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.729411 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5eadec5-b07e-4825-ad38-c41990e4ad98" containerName="mariadb-database-create" Jan 30 17:29:45 crc kubenswrapper[4875]: E0130 17:29:45.729431 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb1a954f-6cce-4ab8-b878-de0c48e9a80d" containerName="mariadb-account-create-update" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.729437 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb1a954f-6cce-4ab8-b878-de0c48e9a80d" containerName="mariadb-account-create-update" Jan 30 17:29:45 crc kubenswrapper[4875]: E0130 17:29:45.729453 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e3597d-60b2-4556-9cf0-994b868f6fa2" containerName="mariadb-account-create-update" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.729458 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e3597d-60b2-4556-9cf0-994b868f6fa2" containerName="mariadb-account-create-update" Jan 30 17:29:45 crc kubenswrapper[4875]: E0130 17:29:45.729472 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06535175-24df-4a19-8892-9936345a6338" containerName="mariadb-database-create" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.729480 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="06535175-24df-4a19-8892-9936345a6338" containerName="mariadb-database-create" Jan 30 17:29:45 crc kubenswrapper[4875]: E0130 17:29:45.729491 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fc551d-b330-4816-9166-fa1e6f145e90" containerName="mariadb-account-create-update" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.729498 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fc551d-b330-4816-9166-fa1e6f145e90" containerName="mariadb-account-create-update" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.729690 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="95fc551d-b330-4816-9166-fa1e6f145e90" containerName="mariadb-account-create-update" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.729704 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="06535175-24df-4a19-8892-9936345a6338" containerName="mariadb-database-create" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.729715 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1e3597d-60b2-4556-9cf0-994b868f6fa2" containerName="mariadb-account-create-update" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.729723 4875 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="e5eadec5-b07e-4825-ad38-c41990e4ad98" containerName="mariadb-database-create" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.729736 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ffccfaf-adf6-49e9-a626-b81376554127" containerName="mariadb-database-create" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.729746 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb1a954f-6cce-4ab8-b878-de0c48e9a80d" containerName="mariadb-account-create-update" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.730249 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.732415 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.733104 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-wlxxk" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.733396 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.741183 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc"] Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.785997 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09788124-6879-4677-83af-a4e8cc11f838-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-6s2rc\" (UID: \"09788124-6879-4677-83af-a4e8cc11f838\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.786067 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09788124-6879-4677-83af-a4e8cc11f838-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-6s2rc\" (UID: \"09788124-6879-4677-83af-a4e8cc11f838\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.786125 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crdfz\" (UniqueName: \"kubernetes.io/projected/09788124-6879-4677-83af-a4e8cc11f838-kube-api-access-crdfz\") pod \"nova-kuttl-cell0-conductor-db-sync-6s2rc\" (UID: \"09788124-6879-4677-83af-a4e8cc11f838\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.887173 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crdfz\" (UniqueName: \"kubernetes.io/projected/09788124-6879-4677-83af-a4e8cc11f838-kube-api-access-crdfz\") pod \"nova-kuttl-cell0-conductor-db-sync-6s2rc\" (UID: \"09788124-6879-4677-83af-a4e8cc11f838\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.887304 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09788124-6879-4677-83af-a4e8cc11f838-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-6s2rc\" (UID: 
\"09788124-6879-4677-83af-a4e8cc11f838\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.887339 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09788124-6879-4677-83af-a4e8cc11f838-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-6s2rc\" (UID: \"09788124-6879-4677-83af-a4e8cc11f838\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.892332 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09788124-6879-4677-83af-a4e8cc11f838-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-6s2rc\" (UID: \"09788124-6879-4677-83af-a4e8cc11f838\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.898032 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09788124-6879-4677-83af-a4e8cc11f838-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-6s2rc\" (UID: \"09788124-6879-4677-83af-a4e8cc11f838\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.908218 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252"] Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.909235 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.910553 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crdfz\" (UniqueName: \"kubernetes.io/projected/09788124-6879-4677-83af-a4e8cc11f838-kube-api-access-crdfz\") pod \"nova-kuttl-cell0-conductor-db-sync-6s2rc\" (UID: \"09788124-6879-4677-83af-a4e8cc11f838\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.913975 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.914144 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.914191 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.915191 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.917551 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-compute-fake1-compute-config-data" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.922888 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252"] Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.933107 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.988430 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/222cd988-6d37-47a7-a67b-bb75d55912f9-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-26252\" (UID: \"222cd988-6d37-47a7-a67b-bb75d55912f9\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.988571 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc5a12f2-88b7-4686-a4dd-f681febdbb09-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bc5a12f2-88b7-4686-a4dd-f681febdbb09\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.988630 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/222cd988-6d37-47a7-a67b-bb75d55912f9-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-26252\" (UID: \"222cd988-6d37-47a7-a67b-bb75d55912f9\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.988666 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s69vp\" (UniqueName: \"kubernetes.io/projected/222cd988-6d37-47a7-a67b-bb75d55912f9-kube-api-access-s69vp\") pod \"nova-kuttl-cell1-conductor-db-sync-26252\" (UID: \"222cd988-6d37-47a7-a67b-bb75d55912f9\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" Jan 30 17:29:45 crc kubenswrapper[4875]: I0130 17:29:45.988688 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2jp2\" (UniqueName: \"kubernetes.io/projected/bc5a12f2-88b7-4686-a4dd-f681febdbb09-kube-api-access-b2jp2\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bc5a12f2-88b7-4686-a4dd-f681febdbb09\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.044008 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.045035 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.049357 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.053070 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.062108 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.090632 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc5a12f2-88b7-4686-a4dd-f681febdbb09-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bc5a12f2-88b7-4686-a4dd-f681febdbb09\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.091040 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/222cd988-6d37-47a7-a67b-bb75d55912f9-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-26252\" (UID: \"222cd988-6d37-47a7-a67b-bb75d55912f9\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.091066 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s69vp\" (UniqueName: \"kubernetes.io/projected/222cd988-6d37-47a7-a67b-bb75d55912f9-kube-api-access-s69vp\") pod \"nova-kuttl-cell1-conductor-db-sync-26252\" (UID: \"222cd988-6d37-47a7-a67b-bb75d55912f9\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.091860 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2jp2\" (UniqueName: \"kubernetes.io/projected/bc5a12f2-88b7-4686-a4dd-f681febdbb09-kube-api-access-b2jp2\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bc5a12f2-88b7-4686-a4dd-f681febdbb09\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.092046 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/222cd988-6d37-47a7-a67b-bb75d55912f9-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-26252\" (UID: \"222cd988-6d37-47a7-a67b-bb75d55912f9\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.096706 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/222cd988-6d37-47a7-a67b-bb75d55912f9-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-26252\" (UID: \"222cd988-6d37-47a7-a67b-bb75d55912f9\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.098162 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc5a12f2-88b7-4686-a4dd-f681febdbb09-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bc5a12f2-88b7-4686-a4dd-f681febdbb09\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.113399 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/222cd988-6d37-47a7-a67b-bb75d55912f9-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-26252\" (UID: \"222cd988-6d37-47a7-a67b-bb75d55912f9\") " 
pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.119156 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2jp2\" (UniqueName: \"kubernetes.io/projected/bc5a12f2-88b7-4686-a4dd-f681febdbb09-kube-api-access-b2jp2\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bc5a12f2-88b7-4686-a4dd-f681febdbb09\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.121790 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s69vp\" (UniqueName: \"kubernetes.io/projected/222cd988-6d37-47a7-a67b-bb75d55912f9-kube-api-access-s69vp\") pod \"nova-kuttl-cell1-conductor-db-sync-26252\" (UID: \"222cd988-6d37-47a7-a67b-bb75d55912f9\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.193844 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dk22\" (UniqueName: \"kubernetes.io/projected/5452c976-86c4-4bc8-8610-f33467f8715c-kube-api-access-9dk22\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5452c976-86c4-4bc8-8610-f33467f8715c\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.193935 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5452c976-86c4-4bc8-8610-f33467f8715c-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5452c976-86c4-4bc8-8610-f33467f8715c\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.280096 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.290821 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.296141 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dk22\" (UniqueName: \"kubernetes.io/projected/5452c976-86c4-4bc8-8610-f33467f8715c-kube-api-access-9dk22\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5452c976-86c4-4bc8-8610-f33467f8715c\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.296227 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5452c976-86c4-4bc8-8610-f33467f8715c-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5452c976-86c4-4bc8-8610-f33467f8715c\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.301933 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5452c976-86c4-4bc8-8610-f33467f8715c-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5452c976-86c4-4bc8-8610-f33467f8715c\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.322353 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dk22\" (UniqueName: \"kubernetes.io/projected/5452c976-86c4-4bc8-8610-f33467f8715c-kube-api-access-9dk22\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5452c976-86c4-4bc8-8610-f33467f8715c\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.367118 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.557789 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc"] Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.752451 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252"] Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.830477 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.839725 4875 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:29:46 crc kubenswrapper[4875]: W0130 17:29:46.899411 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5452c976_86c4_4bc8_8610_f33467f8715c.slice/crio-c252807e8df8c727b5a65229585793127dfbea3ab2a003a32895c8d2845db9a6 WatchSource:0}: Error finding container c252807e8df8c727b5a65229585793127dfbea3ab2a003a32895c8d2845db9a6: Status 404 returned error can't find the container with id c252807e8df8c727b5a65229585793127dfbea3ab2a003a32895c8d2845db9a6 Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.906854 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.969778 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" event={"ID":"222cd988-6d37-47a7-a67b-bb75d55912f9","Type":"ContainerStarted","Data":"348c9809cea5b0835d3a6a39e0b9a76a7319205cc07f3174ae2f8d1fb2dbe029"} Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.969826 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" event={"ID":"222cd988-6d37-47a7-a67b-bb75d55912f9","Type":"ContainerStarted","Data":"14f96a3a8553214f29e1bc525cd7837cba6f060d2f80b1af9e3c238cbfe9aaaf"} Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.973362 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" event={"ID":"09788124-6879-4677-83af-a4e8cc11f838","Type":"ContainerStarted","Data":"d1c70e9e66a5afcf12245057714ca2dd0767c123ca766889d49f554a0578dbd1"} Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.973421 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" event={"ID":"09788124-6879-4677-83af-a4e8cc11f838","Type":"ContainerStarted","Data":"e67151e4e1c67accf232e9c65016cea0e46b57fd1ec06409751a3ed9b24bc211"} Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.975199 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"5452c976-86c4-4bc8-8610-f33467f8715c","Type":"ContainerStarted","Data":"c252807e8df8c727b5a65229585793127dfbea3ab2a003a32895c8d2845db9a6"} Jan 30 17:29:46 crc kubenswrapper[4875]: I0130 17:29:46.976542 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bc5a12f2-88b7-4686-a4dd-f681febdbb09","Type":"ContainerStarted","Data":"fb6e5346ed979cc1e9ce51f5a72925273c7081a332f62963b3c5a9abbf8e8842"} Jan 30 17:29:46 crc 
kubenswrapper[4875]: I0130 17:29:46.988522 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" podStartSLOduration=1.9885047 podStartE2EDuration="1.9885047s" podCreationTimestamp="2026-01-30 17:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:29:46.983905343 +0000 UTC m=+1997.531268736" watchObservedRunningTime="2026-01-30 17:29:46.9885047 +0000 UTC m=+1997.535868083" Jan 30 17:29:47 crc kubenswrapper[4875]: I0130 17:29:47.008593 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" podStartSLOduration=2.00856344 podStartE2EDuration="2.00856344s" podCreationTimestamp="2026-01-30 17:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:29:47.003662144 +0000 UTC m=+1997.551025537" watchObservedRunningTime="2026-01-30 17:29:47.00856344 +0000 UTC m=+1997.555926823" Jan 30 17:29:47 crc kubenswrapper[4875]: I0130 17:29:47.987969 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"5452c976-86c4-4bc8-8610-f33467f8715c","Type":"ContainerStarted","Data":"8cb5e3fcd22f6993c310c1669c45bbec32d03b17568939d6f0e905f4f8994ff4"} Jan 30 17:29:48 crc kubenswrapper[4875]: I0130 17:29:48.035120 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.03510518 podStartE2EDuration="2.03510518s" podCreationTimestamp="2026-01-30 17:29:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:29:48.032831417 +0000 UTC m=+1998.580194850" watchObservedRunningTime="2026-01-30 17:29:48.03510518 +0000 UTC m=+1998.582468553" Jan 30 17:29:50 crc kubenswrapper[4875]: I0130 17:29:50.005369 4875 generic.go:334] "Generic (PLEG): container finished" podID="222cd988-6d37-47a7-a67b-bb75d55912f9" containerID="348c9809cea5b0835d3a6a39e0b9a76a7319205cc07f3174ae2f8d1fb2dbe029" exitCode=0 Jan 30 17:29:50 crc kubenswrapper[4875]: I0130 17:29:50.005440 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" event={"ID":"222cd988-6d37-47a7-a67b-bb75d55912f9","Type":"ContainerDied","Data":"348c9809cea5b0835d3a6a39e0b9a76a7319205cc07f3174ae2f8d1fb2dbe029"} Jan 30 17:29:51 crc kubenswrapper[4875]: I0130 17:29:51.367617 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:29:51 crc kubenswrapper[4875]: I0130 17:29:51.367801 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" Jan 30 17:29:51 crc kubenswrapper[4875]: I0130 17:29:51.511668 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s69vp\" (UniqueName: \"kubernetes.io/projected/222cd988-6d37-47a7-a67b-bb75d55912f9-kube-api-access-s69vp\") pod \"222cd988-6d37-47a7-a67b-bb75d55912f9\" (UID: \"222cd988-6d37-47a7-a67b-bb75d55912f9\") " Jan 30 17:29:51 crc kubenswrapper[4875]: I0130 17:29:51.512059 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/222cd988-6d37-47a7-a67b-bb75d55912f9-config-data\") pod \"222cd988-6d37-47a7-a67b-bb75d55912f9\" (UID: \"222cd988-6d37-47a7-a67b-bb75d55912f9\") " Jan 30 17:29:51 crc kubenswrapper[4875]: I0130 17:29:51.512085 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/222cd988-6d37-47a7-a67b-bb75d55912f9-scripts\") pod \"222cd988-6d37-47a7-a67b-bb75d55912f9\" (UID: \"222cd988-6d37-47a7-a67b-bb75d55912f9\") " Jan 30 17:29:51 crc kubenswrapper[4875]: I0130 17:29:51.517490 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/222cd988-6d37-47a7-a67b-bb75d55912f9-scripts" (OuterVolumeSpecName: "scripts") pod "222cd988-6d37-47a7-a67b-bb75d55912f9" (UID: "222cd988-6d37-47a7-a67b-bb75d55912f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:51 crc kubenswrapper[4875]: I0130 17:29:51.519050 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/222cd988-6d37-47a7-a67b-bb75d55912f9-kube-api-access-s69vp" (OuterVolumeSpecName: "kube-api-access-s69vp") pod "222cd988-6d37-47a7-a67b-bb75d55912f9" (UID: "222cd988-6d37-47a7-a67b-bb75d55912f9"). InnerVolumeSpecName "kube-api-access-s69vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:51 crc kubenswrapper[4875]: I0130 17:29:51.542001 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/222cd988-6d37-47a7-a67b-bb75d55912f9-config-data" (OuterVolumeSpecName: "config-data") pod "222cd988-6d37-47a7-a67b-bb75d55912f9" (UID: "222cd988-6d37-47a7-a67b-bb75d55912f9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:51 crc kubenswrapper[4875]: I0130 17:29:51.614461 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/222cd988-6d37-47a7-a67b-bb75d55912f9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:51 crc kubenswrapper[4875]: I0130 17:29:51.614501 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/222cd988-6d37-47a7-a67b-bb75d55912f9-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:51 crc kubenswrapper[4875]: I0130 17:29:51.614514 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s69vp\" (UniqueName: \"kubernetes.io/projected/222cd988-6d37-47a7-a67b-bb75d55912f9-kube-api-access-s69vp\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.026633 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" event={"ID":"222cd988-6d37-47a7-a67b-bb75d55912f9","Type":"ContainerDied","Data":"14f96a3a8553214f29e1bc525cd7837cba6f060d2f80b1af9e3c238cbfe9aaaf"} Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.026678 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14f96a3a8553214f29e1bc525cd7837cba6f060d2f80b1af9e3c238cbfe9aaaf" Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.026754 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252" Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.032540 4875 generic.go:334] "Generic (PLEG): container finished" podID="09788124-6879-4677-83af-a4e8cc11f838" containerID="d1c70e9e66a5afcf12245057714ca2dd0767c123ca766889d49f554a0578dbd1" exitCode=0 Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.032621 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" event={"ID":"09788124-6879-4677-83af-a4e8cc11f838","Type":"ContainerDied","Data":"d1c70e9e66a5afcf12245057714ca2dd0767c123ca766889d49f554a0578dbd1"} Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.274339 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:29:52 crc kubenswrapper[4875]: E0130 17:29:52.274791 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222cd988-6d37-47a7-a67b-bb75d55912f9" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.274810 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="222cd988-6d37-47a7-a67b-bb75d55912f9" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.275024 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="222cd988-6d37-47a7-a67b-bb75d55912f9" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.275672 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.283970 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.310206 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.427077 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nccvh\" (UniqueName: \"kubernetes.io/projected/c8259d14-22c2-46fe-ae19-81afd949566d-kube-api-access-nccvh\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"c8259d14-22c2-46fe-ae19-81afd949566d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.427149 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8259d14-22c2-46fe-ae19-81afd949566d-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"c8259d14-22c2-46fe-ae19-81afd949566d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.528741 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8259d14-22c2-46fe-ae19-81afd949566d-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"c8259d14-22c2-46fe-ae19-81afd949566d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.528880 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nccvh\" (UniqueName: \"kubernetes.io/projected/c8259d14-22c2-46fe-ae19-81afd949566d-kube-api-access-nccvh\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"c8259d14-22c2-46fe-ae19-81afd949566d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.533219 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8259d14-22c2-46fe-ae19-81afd949566d-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"c8259d14-22c2-46fe-ae19-81afd949566d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.542824 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nccvh\" (UniqueName: \"kubernetes.io/projected/c8259d14-22c2-46fe-ae19-81afd949566d-kube-api-access-nccvh\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"c8259d14-22c2-46fe-ae19-81afd949566d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:29:52 crc kubenswrapper[4875]: I0130 17:29:52.600098 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:29:56 crc kubenswrapper[4875]: I0130 17:29:56.287434 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:29:56 crc kubenswrapper[4875]: I0130 17:29:56.287969 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:29:56 crc kubenswrapper[4875]: I0130 17:29:56.288011 4875 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 17:29:56 crc kubenswrapper[4875]: I0130 17:29:56.288447 4875 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b766e41a157db7a703015b0504adf1f01b15a6ef061e2f64f148c69531ba279"} pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:29:56 crc kubenswrapper[4875]: I0130 17:29:56.288493 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" containerID="cri-o://8b766e41a157db7a703015b0504adf1f01b15a6ef061e2f64f148c69531ba279" gracePeriod=600 Jan 30 17:29:56 crc kubenswrapper[4875]: I0130 17:29:56.368192 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:29:56 crc kubenswrapper[4875]: I0130 17:29:56.383235 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:29:57 crc kubenswrapper[4875]: I0130 17:29:57.084706 4875 generic.go:334] "Generic (PLEG): container finished" podID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerID="8b766e41a157db7a703015b0504adf1f01b15a6ef061e2f64f148c69531ba279" exitCode=0 Jan 30 17:29:57 crc kubenswrapper[4875]: I0130 17:29:57.084760 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerDied","Data":"8b766e41a157db7a703015b0504adf1f01b15a6ef061e2f64f148c69531ba279"} Jan 30 17:29:57 crc kubenswrapper[4875]: I0130 17:29:57.084842 4875 scope.go:117] "RemoveContainer" containerID="229f38d31572af910597a77a6c7031d06b026ccd9058a7b246365185eaaece78" Jan 30 17:29:57 crc kubenswrapper[4875]: I0130 17:29:57.093715 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 30 17:29:58 crc kubenswrapper[4875]: I0130 17:29:58.719784 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" Jan 30 17:29:58 crc kubenswrapper[4875]: I0130 17:29:58.834943 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09788124-6879-4677-83af-a4e8cc11f838-config-data\") pod \"09788124-6879-4677-83af-a4e8cc11f838\" (UID: \"09788124-6879-4677-83af-a4e8cc11f838\") " Jan 30 17:29:58 crc kubenswrapper[4875]: I0130 17:29:58.835019 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crdfz\" (UniqueName: \"kubernetes.io/projected/09788124-6879-4677-83af-a4e8cc11f838-kube-api-access-crdfz\") pod \"09788124-6879-4677-83af-a4e8cc11f838\" (UID: \"09788124-6879-4677-83af-a4e8cc11f838\") " Jan 30 17:29:58 crc kubenswrapper[4875]: I0130 17:29:58.835111 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09788124-6879-4677-83af-a4e8cc11f838-scripts\") pod \"09788124-6879-4677-83af-a4e8cc11f838\" (UID: \"09788124-6879-4677-83af-a4e8cc11f838\") " Jan 30 17:29:58 crc kubenswrapper[4875]: I0130 17:29:58.842085 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09788124-6879-4677-83af-a4e8cc11f838-scripts" (OuterVolumeSpecName: "scripts") pod "09788124-6879-4677-83af-a4e8cc11f838" (UID: "09788124-6879-4677-83af-a4e8cc11f838"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:58 crc kubenswrapper[4875]: I0130 17:29:58.842091 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09788124-6879-4677-83af-a4e8cc11f838-kube-api-access-crdfz" (OuterVolumeSpecName: "kube-api-access-crdfz") pod "09788124-6879-4677-83af-a4e8cc11f838" (UID: "09788124-6879-4677-83af-a4e8cc11f838"). InnerVolumeSpecName "kube-api-access-crdfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:29:58 crc kubenswrapper[4875]: I0130 17:29:58.858030 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09788124-6879-4677-83af-a4e8cc11f838-config-data" (OuterVolumeSpecName: "config-data") pod "09788124-6879-4677-83af-a4e8cc11f838" (UID: "09788124-6879-4677-83af-a4e8cc11f838"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:29:58 crc kubenswrapper[4875]: I0130 17:29:58.936541 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09788124-6879-4677-83af-a4e8cc11f838-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:58 crc kubenswrapper[4875]: I0130 17:29:58.936572 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crdfz\" (UniqueName: \"kubernetes.io/projected/09788124-6879-4677-83af-a4e8cc11f838-kube-api-access-crdfz\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:58 crc kubenswrapper[4875]: I0130 17:29:58.936585 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09788124-6879-4677-83af-a4e8cc11f838-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:29:59 crc kubenswrapper[4875]: I0130 17:29:59.039191 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:29:59 crc kubenswrapper[4875]: I0130 17:29:59.124737 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"c8259d14-22c2-46fe-ae19-81afd949566d","Type":"ContainerStarted","Data":"eb1c0bc3e22d90408224b3183cd0118bbf35148cae8d403a53754598c977b8e2"} Jan 30 17:29:59 crc kubenswrapper[4875]: I0130 17:29:59.126849 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" event={"ID":"09788124-6879-4677-83af-a4e8cc11f838","Type":"ContainerDied","Data":"e67151e4e1c67accf232e9c65016cea0e46b57fd1ec06409751a3ed9b24bc211"} Jan 30 17:29:59 crc kubenswrapper[4875]: I0130 17:29:59.126918 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc" Jan 30 17:29:59 crc kubenswrapper[4875]: I0130 17:29:59.126923 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e67151e4e1c67accf232e9c65016cea0e46b57fd1ec06409751a3ed9b24bc211" Jan 30 17:29:59 crc kubenswrapper[4875]: I0130 17:29:59.803503 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:29:59 crc kubenswrapper[4875]: E0130 17:29:59.804131 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09788124-6879-4677-83af-a4e8cc11f838" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 30 17:29:59 crc kubenswrapper[4875]: I0130 17:29:59.804146 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="09788124-6879-4677-83af-a4e8cc11f838" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 30 17:29:59 crc kubenswrapper[4875]: I0130 17:29:59.804300 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="09788124-6879-4677-83af-a4e8cc11f838" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 30 17:29:59 crc kubenswrapper[4875]: I0130 17:29:59.804821 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:29:59 crc kubenswrapper[4875]: I0130 17:29:59.807623 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 30 17:29:59 crc kubenswrapper[4875]: I0130 17:29:59.853897 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:29:59 crc kubenswrapper[4875]: I0130 17:29:59.952255 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b77110-37aa-4395-9028-e4c8bbad8515-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"e0b77110-37aa-4395-9028-e4c8bbad8515\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:29:59 crc kubenswrapper[4875]: I0130 17:29:59.952561 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btr2p\" (UniqueName: \"kubernetes.io/projected/e0b77110-37aa-4395-9028-e4c8bbad8515-kube-api-access-btr2p\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"e0b77110-37aa-4395-9028-e4c8bbad8515\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.053755 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b77110-37aa-4395-9028-e4c8bbad8515-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"e0b77110-37aa-4395-9028-e4c8bbad8515\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.053815 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btr2p\" (UniqueName: \"kubernetes.io/projected/e0b77110-37aa-4395-9028-e4c8bbad8515-kube-api-access-btr2p\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"e0b77110-37aa-4395-9028-e4c8bbad8515\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.059457 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b77110-37aa-4395-9028-e4c8bbad8515-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"e0b77110-37aa-4395-9028-e4c8bbad8515\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.072377 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btr2p\" (UniqueName: \"kubernetes.io/projected/e0b77110-37aa-4395-9028-e4c8bbad8515-kube-api-access-btr2p\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"e0b77110-37aa-4395-9028-e4c8bbad8515\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.120441 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.128133 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw"] Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.129355 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.133151 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.133199 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.158858 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw"] Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.170163 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerStarted","Data":"704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52"} Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.172343 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"c8259d14-22c2-46fe-ae19-81afd949566d","Type":"ContainerStarted","Data":"7c793348685e3d30ed2d2f6e6f8ba817bd0518cbe0bb405782d1d5a46d91ac42"} Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.173457 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.200253 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bc5a12f2-88b7-4686-a4dd-f681febdbb09","Type":"ContainerStarted","Data":"5cbb82ca7aabf3ee6d84971b498a312e21f28278a1a5feb134c2c0172a741f26"} Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.203534 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.222947 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=8.222928073 podStartE2EDuration="8.222928073s" podCreationTimestamp="2026-01-30 17:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:30:00.217475169 +0000 UTC m=+2010.764838552" watchObservedRunningTime="2026-01-30 17:30:00.222928073 +0000 UTC m=+2010.770291456" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.250533 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.257237 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a-config-volume\") pod \"collect-profiles-29496570-cs5lw\" (UID: \"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.257293 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8dl7\" (UniqueName: 
\"kubernetes.io/projected/5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a-kube-api-access-s8dl7\") pod \"collect-profiles-29496570-cs5lw\" (UID: \"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.257364 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a-secret-volume\") pod \"collect-profiles-29496570-cs5lw\" (UID: \"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.257382 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podStartSLOduration=3.052269229 podStartE2EDuration="15.257362152s" podCreationTimestamp="2026-01-30 17:29:45 +0000 UTC" firstStartedPulling="2026-01-30 17:29:46.839483436 +0000 UTC m=+1997.386846809" lastFinishedPulling="2026-01-30 17:29:59.044576349 +0000 UTC m=+2009.591939732" observedRunningTime="2026-01-30 17:30:00.240892487 +0000 UTC m=+2010.788255890" watchObservedRunningTime="2026-01-30 17:30:00.257362152 +0000 UTC m=+2010.804725535" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.359106 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8dl7\" (UniqueName: \"kubernetes.io/projected/5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a-kube-api-access-s8dl7\") pod \"collect-profiles-29496570-cs5lw\" (UID: \"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.359182 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a-secret-volume\") pod \"collect-profiles-29496570-cs5lw\" (UID: \"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.359324 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a-config-volume\") pod \"collect-profiles-29496570-cs5lw\" (UID: \"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.360187 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a-config-volume\") pod \"collect-profiles-29496570-cs5lw\" (UID: \"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.363796 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a-secret-volume\") pod \"collect-profiles-29496570-cs5lw\" (UID: \"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw" Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.375921 4875 operation_generator.go:637] "MountVolume.SetUp 
Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.522666 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw"
Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.582550 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 30 17:30:00 crc kubenswrapper[4875]: W0130 17:30:00.948416 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ef46e98_ebf5_4a8c_aa36_7a3d8e45ad4a.slice/crio-d810ba05b60741e537b04f3b813d88454d57d7dc0c903a6491dd29d54a39b202 WatchSource:0}: Error finding container d810ba05b60741e537b04f3b813d88454d57d7dc0c903a6491dd29d54a39b202: Status 404 returned error can't find the container with id d810ba05b60741e537b04f3b813d88454d57d7dc0c903a6491dd29d54a39b202
Jan 30 17:30:00 crc kubenswrapper[4875]: I0130 17:30:00.955323 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw"]
Jan 30 17:30:01 crc kubenswrapper[4875]: I0130 17:30:01.208873 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw" event={"ID":"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a","Type":"ContainerStarted","Data":"dd55a6dc4bed4d9d9777aafc7286c44aa97a7d975ad34786650c16dbdabf757d"}
Jan 30 17:30:01 crc kubenswrapper[4875]: I0130 17:30:01.209213 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw" event={"ID":"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a","Type":"ContainerStarted","Data":"d810ba05b60741e537b04f3b813d88454d57d7dc0c903a6491dd29d54a39b202"}
Jan 30 17:30:01 crc kubenswrapper[4875]: I0130 17:30:01.210873 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"e0b77110-37aa-4395-9028-e4c8bbad8515","Type":"ContainerStarted","Data":"fe5f432383d824e223eceb3c4c1c95d2cdf30bccbb3e20ab48339265253e476f"}
Jan 30 17:30:01 crc kubenswrapper[4875]: I0130 17:30:01.210925 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"e0b77110-37aa-4395-9028-e4c8bbad8515","Type":"ContainerStarted","Data":"02dd79b997abb8da8ee6a78c3310b487555c1d5ffd032dd268040f580239f4b8"}
Jan 30 17:30:01 crc kubenswrapper[4875]: I0130 17:30:01.211754 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 30 17:30:01 crc kubenswrapper[4875]: I0130 17:30:01.251461 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw" podStartSLOduration=1.251439057 podStartE2EDuration="1.251439057s" podCreationTimestamp="2026-01-30 17:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:30:01.237291155 +0000 UTC m=+2011.784654548" watchObservedRunningTime="2026-01-30 17:30:01.251439057 +0000 UTC m=+2011.798802440"
Jan 30 17:30:01 crc kubenswrapper[4875]: I0130 17:30:01.253116 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.253100789 podStartE2EDuration="2.253100789s" podCreationTimestamp="2026-01-30 17:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:30:01.25122705 +0000 UTC m=+2011.798590433" watchObservedRunningTime="2026-01-30 17:30:01.253100789 +0000 UTC m=+2011.800464182"
Jan 30 17:30:02 crc kubenswrapper[4875]: I0130 17:30:02.222144 4875 generic.go:334] "Generic (PLEG): container finished" podID="5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a" containerID="dd55a6dc4bed4d9d9777aafc7286c44aa97a7d975ad34786650c16dbdabf757d" exitCode=0
Jan 30 17:30:02 crc kubenswrapper[4875]: I0130 17:30:02.222240 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw" event={"ID":"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a","Type":"ContainerDied","Data":"dd55a6dc4bed4d9d9777aafc7286c44aa97a7d975ad34786650c16dbdabf757d"}
Jan 30 17:30:03 crc kubenswrapper[4875]: I0130 17:30:03.522488 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw"
Jan 30 17:30:03 crc kubenswrapper[4875]: I0130 17:30:03.607085 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a-secret-volume\") pod \"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a\" (UID: \"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a\") "
Jan 30 17:30:03 crc kubenswrapper[4875]: I0130 17:30:03.607475 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8dl7\" (UniqueName: \"kubernetes.io/projected/5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a-kube-api-access-s8dl7\") pod \"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a\" (UID: \"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a\") "
Jan 30 17:30:03 crc kubenswrapper[4875]: I0130 17:30:03.607668 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a-config-volume\") pod \"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a\" (UID: \"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a\") "
Jan 30 17:30:03 crc kubenswrapper[4875]: I0130 17:30:03.608670 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a-config-volume" (OuterVolumeSpecName: "config-volume") pod "5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a" (UID: "5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:30:03 crc kubenswrapper[4875]: I0130 17:30:03.614868 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a-kube-api-access-s8dl7" (OuterVolumeSpecName: "kube-api-access-s8dl7") pod "5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a" (UID: "5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a"). InnerVolumeSpecName "kube-api-access-s8dl7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:30:03 crc kubenswrapper[4875]: I0130 17:30:03.615696 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a" (UID: "5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:30:03 crc kubenswrapper[4875]: I0130 17:30:03.709496 4875 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 17:30:03 crc kubenswrapper[4875]: I0130 17:30:03.709536 4875 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 17:30:03 crc kubenswrapper[4875]: I0130 17:30:03.709553 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8dl7\" (UniqueName: \"kubernetes.io/projected/5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a-kube-api-access-s8dl7\") on node \"crc\" DevicePath \"\""
Jan 30 17:30:04 crc kubenswrapper[4875]: I0130 17:30:04.238985 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw" event={"ID":"5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a","Type":"ContainerDied","Data":"d810ba05b60741e537b04f3b813d88454d57d7dc0c903a6491dd29d54a39b202"}
Jan 30 17:30:04 crc kubenswrapper[4875]: I0130 17:30:04.239025 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d810ba05b60741e537b04f3b813d88454d57d7dc0c903a6491dd29d54a39b202"
Jan 30 17:30:04 crc kubenswrapper[4875]: I0130 17:30:04.239256 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-cs5lw"
Jan 30 17:30:04 crc kubenswrapper[4875]: I0130 17:30:04.285450 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt"]
Jan 30 17:30:04 crc kubenswrapper[4875]: I0130 17:30:04.291969 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-tcxvt"]
Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.142404 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.598944 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c"]
Jan 30 17:30:05 crc kubenswrapper[4875]: E0130 17:30:05.599242 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a" containerName="collect-profiles"
Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.599256 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a" containerName="collect-profiles"
Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.599416 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ef46e98-ebf5-4a8c-aa36-7a3d8e45ad4a" containerName="collect-profiles"
Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.599949 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c"
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.605967 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.606095 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.611250 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c"] Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.639011 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5210e64b-5ccb-46aa-9797-f42f13d13eab-scripts\") pod \"nova-kuttl-cell0-cell-mapping-kfj5c\" (UID: \"5210e64b-5ccb-46aa-9797-f42f13d13eab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.639093 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfxwx\" (UniqueName: \"kubernetes.io/projected/5210e64b-5ccb-46aa-9797-f42f13d13eab-kube-api-access-rfxwx\") pod \"nova-kuttl-cell0-cell-mapping-kfj5c\" (UID: \"5210e64b-5ccb-46aa-9797-f42f13d13eab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.639157 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5210e64b-5ccb-46aa-9797-f42f13d13eab-config-data\") pod \"nova-kuttl-cell0-cell-mapping-kfj5c\" (UID: \"5210e64b-5ccb-46aa-9797-f42f13d13eab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.713536 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.715170 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:05 crc kubenswrapper[4875]: W0130 17:30:05.717954 4875 reflector.go:561] object-"nova-kuttl-default"/"nova-kuttl-api-config-data": failed to list *v1.Secret: secrets "nova-kuttl-api-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "nova-kuttl-default": no relationship found between node 'crc' and this object Jan 30 17:30:05 crc kubenswrapper[4875]: E0130 17:30:05.718009 4875 reflector.go:158] "Unhandled Error" err="object-\"nova-kuttl-default\"/\"nova-kuttl-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-kuttl-api-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"nova-kuttl-default\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.723093 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.740066 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5210e64b-5ccb-46aa-9797-f42f13d13eab-scripts\") pod \"nova-kuttl-cell0-cell-mapping-kfj5c\" (UID: \"5210e64b-5ccb-46aa-9797-f42f13d13eab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.740128 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfxwx\" (UniqueName: \"kubernetes.io/projected/5210e64b-5ccb-46aa-9797-f42f13d13eab-kube-api-access-rfxwx\") pod \"nova-kuttl-cell0-cell-mapping-kfj5c\" (UID: \"5210e64b-5ccb-46aa-9797-f42f13d13eab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.740188 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5210e64b-5ccb-46aa-9797-f42f13d13eab-config-data\") pod \"nova-kuttl-cell0-cell-mapping-kfj5c\" (UID: \"5210e64b-5ccb-46aa-9797-f42f13d13eab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.745303 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5210e64b-5ccb-46aa-9797-f42f13d13eab-scripts\") pod \"nova-kuttl-cell0-cell-mapping-kfj5c\" (UID: \"5210e64b-5ccb-46aa-9797-f42f13d13eab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.755376 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfxwx\" (UniqueName: \"kubernetes.io/projected/5210e64b-5ccb-46aa-9797-f42f13d13eab-kube-api-access-rfxwx\") pod \"nova-kuttl-cell0-cell-mapping-kfj5c\" (UID: \"5210e64b-5ccb-46aa-9797-f42f13d13eab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.758760 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5210e64b-5ccb-46aa-9797-f42f13d13eab-config-data\") pod \"nova-kuttl-cell0-cell-mapping-kfj5c\" (UID: \"5210e64b-5ccb-46aa-9797-f42f13d13eab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" Jan 30 17:30:05 crc 
kubenswrapper[4875]: I0130 17:30:05.830352 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.840699 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.841126 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-config-data\") pod \"nova-kuttl-api-0\" (UID: \"36db59a3-da9e-4ad8-a2f6-abf638ec7e91\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.841175 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-logs\") pod \"nova-kuttl-api-0\" (UID: \"36db59a3-da9e-4ad8-a2f6-abf638ec7e91\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.841193 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tkg5\" (UniqueName: \"kubernetes.io/projected/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-kube-api-access-7tkg5\") pod \"nova-kuttl-api-0\" (UID: \"36db59a3-da9e-4ad8-a2f6-abf638ec7e91\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.843744 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.844368 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.850850 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.852116 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.854904 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.868733 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.903350 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dq4ms"] Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.907051 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dq4ms" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.913568 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dq4ms"] Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.918476 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.942396 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-logs\") pod \"nova-kuttl-api-0\" (UID: \"36db59a3-da9e-4ad8-a2f6-abf638ec7e91\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.942436 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tkg5\" (UniqueName: \"kubernetes.io/projected/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-kube-api-access-7tkg5\") pod \"nova-kuttl-api-0\" (UID: \"36db59a3-da9e-4ad8-a2f6-abf638ec7e91\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.942467 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/057a8d5b-be16-42b8-99fe-5ec8eee230ed-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"057a8d5b-be16-42b8-99fe-5ec8eee230ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.942509 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0edba1d-9578-4bed-abfa-c6625e8f942a-utilities\") pod \"community-operators-dq4ms\" (UID: \"f0edba1d-9578-4bed-abfa-c6625e8f942a\") " pod="openshift-marketplace/community-operators-dq4ms" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.942531 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwf92\" (UniqueName: \"kubernetes.io/projected/61de0af0-81c4-4301-93e5-834b87113ae6-kube-api-access-wwf92\") pod \"nova-kuttl-scheduler-0\" (UID: \"61de0af0-81c4-4301-93e5-834b87113ae6\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.942556 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lgzl\" (UniqueName: \"kubernetes.io/projected/057a8d5b-be16-42b8-99fe-5ec8eee230ed-kube-api-access-8lgzl\") pod \"nova-kuttl-metadata-0\" (UID: \"057a8d5b-be16-42b8-99fe-5ec8eee230ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.942597 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057a8d5b-be16-42b8-99fe-5ec8eee230ed-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"057a8d5b-be16-42b8-99fe-5ec8eee230ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.942622 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk52t\" (UniqueName: \"kubernetes.io/projected/f0edba1d-9578-4bed-abfa-c6625e8f942a-kube-api-access-mk52t\") pod \"community-operators-dq4ms\" (UID: \"f0edba1d-9578-4bed-abfa-c6625e8f942a\") " pod="openshift-marketplace/community-operators-dq4ms" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.942640 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0edba1d-9578-4bed-abfa-c6625e8f942a-catalog-content\") pod 
\"community-operators-dq4ms\" (UID: \"f0edba1d-9578-4bed-abfa-c6625e8f942a\") " pod="openshift-marketplace/community-operators-dq4ms" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.942660 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61de0af0-81c4-4301-93e5-834b87113ae6-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"61de0af0-81c4-4301-93e5-834b87113ae6\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.942677 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-config-data\") pod \"nova-kuttl-api-0\" (UID: \"36db59a3-da9e-4ad8-a2f6-abf638ec7e91\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.943082 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-logs\") pod \"nova-kuttl-api-0\" (UID: \"36db59a3-da9e-4ad8-a2f6-abf638ec7e91\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:05 crc kubenswrapper[4875]: I0130 17:30:05.958607 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tkg5\" (UniqueName: \"kubernetes.io/projected/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-kube-api-access-7tkg5\") pod \"nova-kuttl-api-0\" (UID: \"36db59a3-da9e-4ad8-a2f6-abf638ec7e91\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.044390 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0edba1d-9578-4bed-abfa-c6625e8f942a-utilities\") pod \"community-operators-dq4ms\" (UID: \"f0edba1d-9578-4bed-abfa-c6625e8f942a\") " pod="openshift-marketplace/community-operators-dq4ms" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.044433 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwf92\" (UniqueName: \"kubernetes.io/projected/61de0af0-81c4-4301-93e5-834b87113ae6-kube-api-access-wwf92\") pod \"nova-kuttl-scheduler-0\" (UID: \"61de0af0-81c4-4301-93e5-834b87113ae6\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.044462 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lgzl\" (UniqueName: \"kubernetes.io/projected/057a8d5b-be16-42b8-99fe-5ec8eee230ed-kube-api-access-8lgzl\") pod \"nova-kuttl-metadata-0\" (UID: \"057a8d5b-be16-42b8-99fe-5ec8eee230ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.044501 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057a8d5b-be16-42b8-99fe-5ec8eee230ed-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"057a8d5b-be16-42b8-99fe-5ec8eee230ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.044531 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk52t\" (UniqueName: \"kubernetes.io/projected/f0edba1d-9578-4bed-abfa-c6625e8f942a-kube-api-access-mk52t\") pod \"community-operators-dq4ms\" (UID: \"f0edba1d-9578-4bed-abfa-c6625e8f942a\") " 
pod="openshift-marketplace/community-operators-dq4ms" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.044547 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0edba1d-9578-4bed-abfa-c6625e8f942a-catalog-content\") pod \"community-operators-dq4ms\" (UID: \"f0edba1d-9578-4bed-abfa-c6625e8f942a\") " pod="openshift-marketplace/community-operators-dq4ms" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.044572 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61de0af0-81c4-4301-93e5-834b87113ae6-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"61de0af0-81c4-4301-93e5-834b87113ae6\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.044669 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/057a8d5b-be16-42b8-99fe-5ec8eee230ed-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"057a8d5b-be16-42b8-99fe-5ec8eee230ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.045064 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/057a8d5b-be16-42b8-99fe-5ec8eee230ed-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"057a8d5b-be16-42b8-99fe-5ec8eee230ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.045394 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0edba1d-9578-4bed-abfa-c6625e8f942a-utilities\") pod \"community-operators-dq4ms\" (UID: \"f0edba1d-9578-4bed-abfa-c6625e8f942a\") " pod="openshift-marketplace/community-operators-dq4ms" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.046204 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0edba1d-9578-4bed-abfa-c6625e8f942a-catalog-content\") pod \"community-operators-dq4ms\" (UID: \"f0edba1d-9578-4bed-abfa-c6625e8f942a\") " pod="openshift-marketplace/community-operators-dq4ms" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.050445 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057a8d5b-be16-42b8-99fe-5ec8eee230ed-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"057a8d5b-be16-42b8-99fe-5ec8eee230ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.060290 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61de0af0-81c4-4301-93e5-834b87113ae6-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"61de0af0-81c4-4301-93e5-834b87113ae6\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.067545 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwf92\" (UniqueName: \"kubernetes.io/projected/61de0af0-81c4-4301-93e5-834b87113ae6-kube-api-access-wwf92\") pod \"nova-kuttl-scheduler-0\" (UID: \"61de0af0-81c4-4301-93e5-834b87113ae6\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.067889 4875 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8lgzl\" (UniqueName: \"kubernetes.io/projected/057a8d5b-be16-42b8-99fe-5ec8eee230ed-kube-api-access-8lgzl\") pod \"nova-kuttl-metadata-0\" (UID: \"057a8d5b-be16-42b8-99fe-5ec8eee230ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.077195 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk52t\" (UniqueName: \"kubernetes.io/projected/f0edba1d-9578-4bed-abfa-c6625e8f942a-kube-api-access-mk52t\") pod \"community-operators-dq4ms\" (UID: \"f0edba1d-9578-4bed-abfa-c6625e8f942a\") " pod="openshift-marketplace/community-operators-dq4ms" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.154214 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94dc77e6-c491-4bda-a95f-6ab4892d06db" path="/var/lib/kubelet/pods/94dc77e6-c491-4bda-a95f-6ab4892d06db/volumes" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.170663 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.185252 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.227276 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dq4ms" Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.492155 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c"] Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.636245 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:30:06 crc kubenswrapper[4875]: W0130 17:30:06.638773 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61de0af0_81c4_4301_93e5_834b87113ae6.slice/crio-793d99064dbee442f3fd58cb6e0d48ac22ce951ccb537101103fe5bae5f65edd WatchSource:0}: Error finding container 793d99064dbee442f3fd58cb6e0d48ac22ce951ccb537101103fe5bae5f65edd: Status 404 returned error can't find the container with id 793d99064dbee442f3fd58cb6e0d48ac22ce951ccb537101103fe5bae5f65edd Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.643361 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.803151 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dq4ms"] Jan 30 17:30:06 crc kubenswrapper[4875]: E0130 17:30:06.943738 4875 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-api-config-data: failed to sync secret cache: timed out waiting for the condition Jan 30 17:30:06 crc kubenswrapper[4875]: E0130 17:30:06.944020 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-config-data podName:36db59a3-da9e-4ad8-a2f6-abf638ec7e91 nodeName:}" failed. No retries permitted until 2026-01-30 17:30:07.444001019 +0000 UTC m=+2017.991364402 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-config-data") pod "nova-kuttl-api-0" (UID: "36db59a3-da9e-4ad8-a2f6-abf638ec7e91") : failed to sync secret cache: timed out waiting for the condition Jan 30 17:30:06 crc kubenswrapper[4875]: I0130 17:30:06.952803 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.267106 4875 generic.go:334] "Generic (PLEG): container finished" podID="f0edba1d-9578-4bed-abfa-c6625e8f942a" containerID="0b56ecf0f24c2095d9999b4495359d97ea676f34f6fe73f2ed7705f1c591733c" exitCode=0 Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.267156 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq4ms" event={"ID":"f0edba1d-9578-4bed-abfa-c6625e8f942a","Type":"ContainerDied","Data":"0b56ecf0f24c2095d9999b4495359d97ea676f34f6fe73f2ed7705f1c591733c"} Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.267203 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq4ms" event={"ID":"f0edba1d-9578-4bed-abfa-c6625e8f942a","Type":"ContainerStarted","Data":"a8bb062394b1dba8bcab80edb0ca7de61155a4e65a12f8aeec81befab862e9ff"} Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.271091 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"61de0af0-81c4-4301-93e5-834b87113ae6","Type":"ContainerStarted","Data":"54a322846ffdc6834ae292967a2249fcff80a7b6592d4e69e6e194caa8cc68c5"} Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.271133 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"61de0af0-81c4-4301-93e5-834b87113ae6","Type":"ContainerStarted","Data":"793d99064dbee442f3fd58cb6e0d48ac22ce951ccb537101103fe5bae5f65edd"} Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.274802 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" event={"ID":"5210e64b-5ccb-46aa-9797-f42f13d13eab","Type":"ContainerStarted","Data":"767d982e7af4d83c650086b701a2fa9f9a5089fc861ca8cd8afe522f243d9970"} Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.274841 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" event={"ID":"5210e64b-5ccb-46aa-9797-f42f13d13eab","Type":"ContainerStarted","Data":"1a9ad5c62a6bb7a2f51ae83647370974d95a1b9dde871041ed068b7daaf2cb1c"} Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.276439 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"057a8d5b-be16-42b8-99fe-5ec8eee230ed","Type":"ContainerStarted","Data":"6d27ae8cc2db33d75d5fbcd529842ce3cf41f43b0ec31b4bfdfdc02d8d56150d"} Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.276483 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"057a8d5b-be16-42b8-99fe-5ec8eee230ed","Type":"ContainerStarted","Data":"43d4060f287b66a3a82f4a475cd54d01e9fcd295c2f1afb79b4baf9a869c9c31"} Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.276498 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" 
event={"ID":"057a8d5b-be16-42b8-99fe-5ec8eee230ed","Type":"ContainerStarted","Data":"0d96ac92f7f7964f3f703d2f8ae9d3f63cf8af4d538b4790120b6536ce650686"} Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.308188 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kcdwk"] Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.310336 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kcdwk" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.330997 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.330978825 podStartE2EDuration="2.330978825s" podCreationTimestamp="2026-01-30 17:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:30:07.307036252 +0000 UTC m=+2017.854399645" watchObservedRunningTime="2026-01-30 17:30:07.330978825 +0000 UTC m=+2017.878342208" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.331714 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcdwk"] Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.343881 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.343858336 podStartE2EDuration="2.343858336s" podCreationTimestamp="2026-01-30 17:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:30:07.32923831 +0000 UTC m=+2017.876601683" watchObservedRunningTime="2026-01-30 17:30:07.343858336 +0000 UTC m=+2017.891221729" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.348575 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" podStartSLOduration=2.348559156 podStartE2EDuration="2.348559156s" podCreationTimestamp="2026-01-30 17:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:30:07.341869003 +0000 UTC m=+2017.889232396" watchObservedRunningTime="2026-01-30 17:30:07.348559156 +0000 UTC m=+2017.895922539" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.373345 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867bde8d-d540-459b-a0ee-90ee2eb735ef-catalog-content\") pod \"redhat-marketplace-kcdwk\" (UID: \"867bde8d-d540-459b-a0ee-90ee2eb735ef\") " pod="openshift-marketplace/redhat-marketplace-kcdwk" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.373493 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x77q9\" (UniqueName: \"kubernetes.io/projected/867bde8d-d540-459b-a0ee-90ee2eb735ef-kube-api-access-x77q9\") pod \"redhat-marketplace-kcdwk\" (UID: \"867bde8d-d540-459b-a0ee-90ee2eb735ef\") " pod="openshift-marketplace/redhat-marketplace-kcdwk" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.373813 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867bde8d-d540-459b-a0ee-90ee2eb735ef-utilities\") pod \"redhat-marketplace-kcdwk\" 
(UID: \"867bde8d-d540-459b-a0ee-90ee2eb735ef\") " pod="openshift-marketplace/redhat-marketplace-kcdwk" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.475354 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867bde8d-d540-459b-a0ee-90ee2eb735ef-catalog-content\") pod \"redhat-marketplace-kcdwk\" (UID: \"867bde8d-d540-459b-a0ee-90ee2eb735ef\") " pod="openshift-marketplace/redhat-marketplace-kcdwk" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.475568 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x77q9\" (UniqueName: \"kubernetes.io/projected/867bde8d-d540-459b-a0ee-90ee2eb735ef-kube-api-access-x77q9\") pod \"redhat-marketplace-kcdwk\" (UID: \"867bde8d-d540-459b-a0ee-90ee2eb735ef\") " pod="openshift-marketplace/redhat-marketplace-kcdwk" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.475691 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867bde8d-d540-459b-a0ee-90ee2eb735ef-utilities\") pod \"redhat-marketplace-kcdwk\" (UID: \"867bde8d-d540-459b-a0ee-90ee2eb735ef\") " pod="openshift-marketplace/redhat-marketplace-kcdwk" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.475734 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-config-data\") pod \"nova-kuttl-api-0\" (UID: \"36db59a3-da9e-4ad8-a2f6-abf638ec7e91\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.475842 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867bde8d-d540-459b-a0ee-90ee2eb735ef-catalog-content\") pod \"redhat-marketplace-kcdwk\" (UID: \"867bde8d-d540-459b-a0ee-90ee2eb735ef\") " pod="openshift-marketplace/redhat-marketplace-kcdwk" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.476181 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867bde8d-d540-459b-a0ee-90ee2eb735ef-utilities\") pod \"redhat-marketplace-kcdwk\" (UID: \"867bde8d-d540-459b-a0ee-90ee2eb735ef\") " pod="openshift-marketplace/redhat-marketplace-kcdwk" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.490509 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-config-data\") pod \"nova-kuttl-api-0\" (UID: \"36db59a3-da9e-4ad8-a2f6-abf638ec7e91\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.497359 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x77q9\" (UniqueName: \"kubernetes.io/projected/867bde8d-d540-459b-a0ee-90ee2eb735ef-kube-api-access-x77q9\") pod \"redhat-marketplace-kcdwk\" (UID: \"867bde8d-d540-459b-a0ee-90ee2eb735ef\") " pod="openshift-marketplace/redhat-marketplace-kcdwk" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.531331 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.637422 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kcdwk" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.642791 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:30:07 crc kubenswrapper[4875]: I0130 17:30:07.979105 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.114270 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcdwk"] Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.203325 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk"] Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.204557 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.215003 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.215164 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.237320 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq"] Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.238807 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.287417 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"36db59a3-da9e-4ad8-a2f6-abf638ec7e91","Type":"ContainerStarted","Data":"7a0eaa753994c54943c3e02c7101df072d4dc78aed6aa4a15362bfd5ab7f8196"} Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.287464 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"36db59a3-da9e-4ad8-a2f6-abf638ec7e91","Type":"ContainerStarted","Data":"416c46b056abfd0dd904170032df23ab33cb3b6bd89f35c4ea78099e209d3a30"} Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.293889 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4352d42d-6f43-4899-95e8-cd45c91c2a6e-config-data\") pod \"nova-kuttl-cell1-host-discover-czqhq\" (UID: \"4352d42d-6f43-4899-95e8-cd45c91c2a6e\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.293954 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlhks\" (UniqueName: \"kubernetes.io/projected/220f50f1-8337-455d-b973-24e9d7b1917c-kube-api-access-wlhks\") pod \"nova-kuttl-cell1-cell-mapping-789gk\" (UID: \"220f50f1-8337-455d-b973-24e9d7b1917c\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.293996 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp7s2\" (UniqueName: \"kubernetes.io/projected/4352d42d-6f43-4899-95e8-cd45c91c2a6e-kube-api-access-bp7s2\") pod 
\"nova-kuttl-cell1-host-discover-czqhq\" (UID: \"4352d42d-6f43-4899-95e8-cd45c91c2a6e\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.294049 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4352d42d-6f43-4899-95e8-cd45c91c2a6e-scripts\") pod \"nova-kuttl-cell1-host-discover-czqhq\" (UID: \"4352d42d-6f43-4899-95e8-cd45c91c2a6e\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.294103 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/220f50f1-8337-455d-b973-24e9d7b1917c-scripts\") pod \"nova-kuttl-cell1-cell-mapping-789gk\" (UID: \"220f50f1-8337-455d-b973-24e9d7b1917c\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.294140 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/220f50f1-8337-455d-b973-24e9d7b1917c-config-data\") pod \"nova-kuttl-cell1-cell-mapping-789gk\" (UID: \"220f50f1-8337-455d-b973-24e9d7b1917c\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.294244 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk"] Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.298745 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq4ms" event={"ID":"f0edba1d-9578-4bed-abfa-c6625e8f942a","Type":"ContainerStarted","Data":"6db66f51136ea6fbc9298a3af669583a96ecbf74946867e23baea2c857d067c6"} Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.306164 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq"] Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.320314 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcdwk" event={"ID":"867bde8d-d540-459b-a0ee-90ee2eb735ef","Type":"ContainerStarted","Data":"0f09e2dc8f6b9ee9bb23bb7c2c527972d47a1dc330f85c78025cba6e3a02ce6b"} Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.399301 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp7s2\" (UniqueName: \"kubernetes.io/projected/4352d42d-6f43-4899-95e8-cd45c91c2a6e-kube-api-access-bp7s2\") pod \"nova-kuttl-cell1-host-discover-czqhq\" (UID: \"4352d42d-6f43-4899-95e8-cd45c91c2a6e\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.399460 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4352d42d-6f43-4899-95e8-cd45c91c2a6e-scripts\") pod \"nova-kuttl-cell1-host-discover-czqhq\" (UID: \"4352d42d-6f43-4899-95e8-cd45c91c2a6e\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.399580 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/220f50f1-8337-455d-b973-24e9d7b1917c-scripts\") pod \"nova-kuttl-cell1-cell-mapping-789gk\" (UID: 
\"220f50f1-8337-455d-b973-24e9d7b1917c\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.399676 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/220f50f1-8337-455d-b973-24e9d7b1917c-config-data\") pod \"nova-kuttl-cell1-cell-mapping-789gk\" (UID: \"220f50f1-8337-455d-b973-24e9d7b1917c\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.399849 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4352d42d-6f43-4899-95e8-cd45c91c2a6e-config-data\") pod \"nova-kuttl-cell1-host-discover-czqhq\" (UID: \"4352d42d-6f43-4899-95e8-cd45c91c2a6e\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.399916 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlhks\" (UniqueName: \"kubernetes.io/projected/220f50f1-8337-455d-b973-24e9d7b1917c-kube-api-access-wlhks\") pod \"nova-kuttl-cell1-cell-mapping-789gk\" (UID: \"220f50f1-8337-455d-b973-24e9d7b1917c\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.405046 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4352d42d-6f43-4899-95e8-cd45c91c2a6e-config-data\") pod \"nova-kuttl-cell1-host-discover-czqhq\" (UID: \"4352d42d-6f43-4899-95e8-cd45c91c2a6e\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.406111 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4352d42d-6f43-4899-95e8-cd45c91c2a6e-scripts\") pod \"nova-kuttl-cell1-host-discover-czqhq\" (UID: \"4352d42d-6f43-4899-95e8-cd45c91c2a6e\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.406358 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/220f50f1-8337-455d-b973-24e9d7b1917c-scripts\") pod \"nova-kuttl-cell1-cell-mapping-789gk\" (UID: \"220f50f1-8337-455d-b973-24e9d7b1917c\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.407175 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/220f50f1-8337-455d-b973-24e9d7b1917c-config-data\") pod \"nova-kuttl-cell1-cell-mapping-789gk\" (UID: \"220f50f1-8337-455d-b973-24e9d7b1917c\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.415472 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlhks\" (UniqueName: \"kubernetes.io/projected/220f50f1-8337-455d-b973-24e9d7b1917c-kube-api-access-wlhks\") pod \"nova-kuttl-cell1-cell-mapping-789gk\" (UID: \"220f50f1-8337-455d-b973-24e9d7b1917c\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.419000 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp7s2\" (UniqueName: 
\"kubernetes.io/projected/4352d42d-6f43-4899-95e8-cd45c91c2a6e-kube-api-access-bp7s2\") pod \"nova-kuttl-cell1-host-discover-czqhq\" (UID: \"4352d42d-6f43-4899-95e8-cd45c91c2a6e\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.572947 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" Jan 30 17:30:08 crc kubenswrapper[4875]: I0130 17:30:08.585106 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" Jan 30 17:30:09 crc kubenswrapper[4875]: W0130 17:30:09.049262 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod220f50f1_8337_455d_b973_24e9d7b1917c.slice/crio-1764b4a5fe73bb174a66b55dee964f038e15b919af09c927a419a11c8e66a3d1 WatchSource:0}: Error finding container 1764b4a5fe73bb174a66b55dee964f038e15b919af09c927a419a11c8e66a3d1: Status 404 returned error can't find the container with id 1764b4a5fe73bb174a66b55dee964f038e15b919af09c927a419a11c8e66a3d1 Jan 30 17:30:09 crc kubenswrapper[4875]: I0130 17:30:09.055917 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk"] Jan 30 17:30:09 crc kubenswrapper[4875]: I0130 17:30:09.117800 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq"] Jan 30 17:30:09 crc kubenswrapper[4875]: W0130 17:30:09.122408 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4352d42d_6f43_4899_95e8_cd45c91c2a6e.slice/crio-f5fe3b6f28b25300795baec894743411009b7ba0d0a7b60c2431502f8f198643 WatchSource:0}: Error finding container f5fe3b6f28b25300795baec894743411009b7ba0d0a7b60c2431502f8f198643: Status 404 returned error can't find the container with id f5fe3b6f28b25300795baec894743411009b7ba0d0a7b60c2431502f8f198643 Jan 30 17:30:09 crc kubenswrapper[4875]: I0130 17:30:09.345347 4875 generic.go:334] "Generic (PLEG): container finished" podID="f0edba1d-9578-4bed-abfa-c6625e8f942a" containerID="6db66f51136ea6fbc9298a3af669583a96ecbf74946867e23baea2c857d067c6" exitCode=0 Jan 30 17:30:09 crc kubenswrapper[4875]: I0130 17:30:09.347733 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq4ms" event={"ID":"f0edba1d-9578-4bed-abfa-c6625e8f942a","Type":"ContainerDied","Data":"6db66f51136ea6fbc9298a3af669583a96ecbf74946867e23baea2c857d067c6"} Jan 30 17:30:09 crc kubenswrapper[4875]: I0130 17:30:09.349728 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" event={"ID":"4352d42d-6f43-4899-95e8-cd45c91c2a6e","Type":"ContainerStarted","Data":"1cb971fcc00cf431468aae3ec808d9c56c04576350576b1f528a7cf2de6a0059"} Jan 30 17:30:09 crc kubenswrapper[4875]: I0130 17:30:09.349768 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" event={"ID":"4352d42d-6f43-4899-95e8-cd45c91c2a6e","Type":"ContainerStarted","Data":"f5fe3b6f28b25300795baec894743411009b7ba0d0a7b60c2431502f8f198643"} Jan 30 17:30:09 crc kubenswrapper[4875]: I0130 17:30:09.354911 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" 
event={"ID":"220f50f1-8337-455d-b973-24e9d7b1917c","Type":"ContainerStarted","Data":"170ea35065a3a7a0019d371269ebeffdbd2f8bc3debdb53c930db1f18979556f"} Jan 30 17:30:09 crc kubenswrapper[4875]: I0130 17:30:09.354943 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" event={"ID":"220f50f1-8337-455d-b973-24e9d7b1917c","Type":"ContainerStarted","Data":"1764b4a5fe73bb174a66b55dee964f038e15b919af09c927a419a11c8e66a3d1"} Jan 30 17:30:09 crc kubenswrapper[4875]: I0130 17:30:09.358545 4875 generic.go:334] "Generic (PLEG): container finished" podID="867bde8d-d540-459b-a0ee-90ee2eb735ef" containerID="2f168edfe8f497869c8ece3794c204f8ca2b5d436ea2a53f3abeea43a6e38ab5" exitCode=0 Jan 30 17:30:09 crc kubenswrapper[4875]: I0130 17:30:09.358767 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcdwk" event={"ID":"867bde8d-d540-459b-a0ee-90ee2eb735ef","Type":"ContainerDied","Data":"2f168edfe8f497869c8ece3794c204f8ca2b5d436ea2a53f3abeea43a6e38ab5"} Jan 30 17:30:09 crc kubenswrapper[4875]: I0130 17:30:09.372555 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"36db59a3-da9e-4ad8-a2f6-abf638ec7e91","Type":"ContainerStarted","Data":"babbc9f173a5184992a39fd50df7912d796d8b5cb9807e1f7d209c1d5a11083d"} Jan 30 17:30:09 crc kubenswrapper[4875]: I0130 17:30:09.408480 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" podStartSLOduration=1.4084595549999999 podStartE2EDuration="1.408459555s" podCreationTimestamp="2026-01-30 17:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:30:09.40427511 +0000 UTC m=+2019.951638493" watchObservedRunningTime="2026-01-30 17:30:09.408459555 +0000 UTC m=+2019.955822948" Jan 30 17:30:09 crc kubenswrapper[4875]: I0130 17:30:09.419851 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" podStartSLOduration=1.419835928 podStartE2EDuration="1.419835928s" podCreationTimestamp="2026-01-30 17:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:30:09.418059391 +0000 UTC m=+2019.965422774" watchObservedRunningTime="2026-01-30 17:30:09.419835928 +0000 UTC m=+2019.967199311" Jan 30 17:30:09 crc kubenswrapper[4875]: I0130 17:30:09.441722 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=4.441700525 podStartE2EDuration="4.441700525s" podCreationTimestamp="2026-01-30 17:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:30:09.434521746 +0000 UTC m=+2019.981885149" watchObservedRunningTime="2026-01-30 17:30:09.441700525 +0000 UTC m=+2019.989063908" Jan 30 17:30:10 crc kubenswrapper[4875]: I0130 17:30:10.381979 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq4ms" event={"ID":"f0edba1d-9578-4bed-abfa-c6625e8f942a","Type":"ContainerStarted","Data":"3cfa334be804f7d87d4b514656e2141531ffb7dc0bf3d216dbff6d472dd112ab"} Jan 30 17:30:10 crc kubenswrapper[4875]: I0130 17:30:10.384060 4875 generic.go:334] "Generic (PLEG): container 
finished" podID="867bde8d-d540-459b-a0ee-90ee2eb735ef" containerID="e112f052d50fbb1173afad1f4522f79892163468669c5b5a32c12c00f43cd584" exitCode=0 Jan 30 17:30:10 crc kubenswrapper[4875]: I0130 17:30:10.384159 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcdwk" event={"ID":"867bde8d-d540-459b-a0ee-90ee2eb735ef","Type":"ContainerDied","Data":"e112f052d50fbb1173afad1f4522f79892163468669c5b5a32c12c00f43cd584"} Jan 30 17:30:10 crc kubenswrapper[4875]: I0130 17:30:10.400989 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dq4ms" podStartSLOduration=2.902389355 podStartE2EDuration="5.400971189s" podCreationTimestamp="2026-01-30 17:30:05 +0000 UTC" firstStartedPulling="2026-01-30 17:30:07.268505912 +0000 UTC m=+2017.815869295" lastFinishedPulling="2026-01-30 17:30:09.767087746 +0000 UTC m=+2020.314451129" observedRunningTime="2026-01-30 17:30:10.397785698 +0000 UTC m=+2020.945149081" watchObservedRunningTime="2026-01-30 17:30:10.400971189 +0000 UTC m=+2020.948334572" Jan 30 17:30:10 crc kubenswrapper[4875]: I0130 17:30:10.894815 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wv8ks"] Jan 30 17:30:10 crc kubenswrapper[4875]: I0130 17:30:10.896425 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wv8ks" Jan 30 17:30:10 crc kubenswrapper[4875]: I0130 17:30:10.909135 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wv8ks"] Jan 30 17:30:10 crc kubenswrapper[4875]: I0130 17:30:10.974681 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731abbcd-7cd7-49f8-baf9-ef35c4e00897-catalog-content\") pod \"redhat-operators-wv8ks\" (UID: \"731abbcd-7cd7-49f8-baf9-ef35c4e00897\") " pod="openshift-marketplace/redhat-operators-wv8ks" Jan 30 17:30:10 crc kubenswrapper[4875]: I0130 17:30:10.974755 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731abbcd-7cd7-49f8-baf9-ef35c4e00897-utilities\") pod \"redhat-operators-wv8ks\" (UID: \"731abbcd-7cd7-49f8-baf9-ef35c4e00897\") " pod="openshift-marketplace/redhat-operators-wv8ks" Jan 30 17:30:10 crc kubenswrapper[4875]: I0130 17:30:10.974801 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hrxv\" (UniqueName: \"kubernetes.io/projected/731abbcd-7cd7-49f8-baf9-ef35c4e00897-kube-api-access-6hrxv\") pod \"redhat-operators-wv8ks\" (UID: \"731abbcd-7cd7-49f8-baf9-ef35c4e00897\") " pod="openshift-marketplace/redhat-operators-wv8ks" Jan 30 17:30:11 crc kubenswrapper[4875]: I0130 17:30:11.076730 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hrxv\" (UniqueName: \"kubernetes.io/projected/731abbcd-7cd7-49f8-baf9-ef35c4e00897-kube-api-access-6hrxv\") pod \"redhat-operators-wv8ks\" (UID: \"731abbcd-7cd7-49f8-baf9-ef35c4e00897\") " pod="openshift-marketplace/redhat-operators-wv8ks" Jan 30 17:30:11 crc kubenswrapper[4875]: I0130 17:30:11.079902 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731abbcd-7cd7-49f8-baf9-ef35c4e00897-catalog-content\") pod \"redhat-operators-wv8ks\" 
(UID: \"731abbcd-7cd7-49f8-baf9-ef35c4e00897\") " pod="openshift-marketplace/redhat-operators-wv8ks" Jan 30 17:30:11 crc kubenswrapper[4875]: I0130 17:30:11.080137 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731abbcd-7cd7-49f8-baf9-ef35c4e00897-utilities\") pod \"redhat-operators-wv8ks\" (UID: \"731abbcd-7cd7-49f8-baf9-ef35c4e00897\") " pod="openshift-marketplace/redhat-operators-wv8ks" Jan 30 17:30:11 crc kubenswrapper[4875]: I0130 17:30:11.080839 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731abbcd-7cd7-49f8-baf9-ef35c4e00897-utilities\") pod \"redhat-operators-wv8ks\" (UID: \"731abbcd-7cd7-49f8-baf9-ef35c4e00897\") " pod="openshift-marketplace/redhat-operators-wv8ks" Jan 30 17:30:11 crc kubenswrapper[4875]: I0130 17:30:11.082123 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731abbcd-7cd7-49f8-baf9-ef35c4e00897-catalog-content\") pod \"redhat-operators-wv8ks\" (UID: \"731abbcd-7cd7-49f8-baf9-ef35c4e00897\") " pod="openshift-marketplace/redhat-operators-wv8ks" Jan 30 17:30:11 crc kubenswrapper[4875]: I0130 17:30:11.108906 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hrxv\" (UniqueName: \"kubernetes.io/projected/731abbcd-7cd7-49f8-baf9-ef35c4e00897-kube-api-access-6hrxv\") pod \"redhat-operators-wv8ks\" (UID: \"731abbcd-7cd7-49f8-baf9-ef35c4e00897\") " pod="openshift-marketplace/redhat-operators-wv8ks" Jan 30 17:30:11 crc kubenswrapper[4875]: I0130 17:30:11.172610 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:11 crc kubenswrapper[4875]: I0130 17:30:11.172685 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:11 crc kubenswrapper[4875]: I0130 17:30:11.186090 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:11 crc kubenswrapper[4875]: I0130 17:30:11.228138 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wv8ks" Jan 30 17:30:11 crc kubenswrapper[4875]: I0130 17:30:11.405318 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcdwk" event={"ID":"867bde8d-d540-459b-a0ee-90ee2eb735ef","Type":"ContainerStarted","Data":"8e73d21d38adc4ad54cac8ecf7cbfd6282a3ffc5d418b6271fab5bf5f94cc18e"} Jan 30 17:30:11 crc kubenswrapper[4875]: I0130 17:30:11.447159 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kcdwk" podStartSLOduration=2.906765884 podStartE2EDuration="4.447140836s" podCreationTimestamp="2026-01-30 17:30:07 +0000 UTC" firstStartedPulling="2026-01-30 17:30:09.361878959 +0000 UTC m=+2019.909242342" lastFinishedPulling="2026-01-30 17:30:10.902253911 +0000 UTC m=+2021.449617294" observedRunningTime="2026-01-30 17:30:11.437856139 +0000 UTC m=+2021.985219522" watchObservedRunningTime="2026-01-30 17:30:11.447140836 +0000 UTC m=+2021.994504209" Jan 30 17:30:11 crc kubenswrapper[4875]: I0130 17:30:11.699160 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wv8ks"] Jan 30 17:30:12 crc kubenswrapper[4875]: I0130 17:30:12.416149 4875 generic.go:334] "Generic (PLEG): container finished" podID="5210e64b-5ccb-46aa-9797-f42f13d13eab" containerID="767d982e7af4d83c650086b701a2fa9f9a5089fc861ca8cd8afe522f243d9970" exitCode=0 Jan 30 17:30:12 crc kubenswrapper[4875]: I0130 17:30:12.416245 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" event={"ID":"5210e64b-5ccb-46aa-9797-f42f13d13eab","Type":"ContainerDied","Data":"767d982e7af4d83c650086b701a2fa9f9a5089fc861ca8cd8afe522f243d9970"} Jan 30 17:30:12 crc kubenswrapper[4875]: I0130 17:30:12.418710 4875 generic.go:334] "Generic (PLEG): container finished" podID="731abbcd-7cd7-49f8-baf9-ef35c4e00897" containerID="528bea37e39e1e8d1caf3a742fcfd949e83bcae3fa7449b5775c3adb61ab074e" exitCode=0 Jan 30 17:30:12 crc kubenswrapper[4875]: I0130 17:30:12.418863 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wv8ks" event={"ID":"731abbcd-7cd7-49f8-baf9-ef35c4e00897","Type":"ContainerDied","Data":"528bea37e39e1e8d1caf3a742fcfd949e83bcae3fa7449b5775c3adb61ab074e"} Jan 30 17:30:12 crc kubenswrapper[4875]: I0130 17:30:12.418948 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wv8ks" event={"ID":"731abbcd-7cd7-49f8-baf9-ef35c4e00897","Type":"ContainerStarted","Data":"edad9f498dc8827855599694447676d4490d09077b185ac3db7cac77d4c6fe07"} Jan 30 17:30:13 crc kubenswrapper[4875]: I0130 17:30:13.426486 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wv8ks" event={"ID":"731abbcd-7cd7-49f8-baf9-ef35c4e00897","Type":"ContainerStarted","Data":"0c974deeedacc9a90126815645dae49aae930cc6b53e761e8c834673e7c4efd4"} Jan 30 17:30:13 crc kubenswrapper[4875]: I0130 17:30:13.427889 4875 generic.go:334] "Generic (PLEG): container finished" podID="4352d42d-6f43-4899-95e8-cd45c91c2a6e" containerID="1cb971fcc00cf431468aae3ec808d9c56c04576350576b1f528a7cf2de6a0059" exitCode=255 Jan 30 17:30:13 crc kubenswrapper[4875]: I0130 17:30:13.427957 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" 
event={"ID":"4352d42d-6f43-4899-95e8-cd45c91c2a6e","Type":"ContainerDied","Data":"1cb971fcc00cf431468aae3ec808d9c56c04576350576b1f528a7cf2de6a0059"} Jan 30 17:30:13 crc kubenswrapper[4875]: I0130 17:30:13.428629 4875 scope.go:117] "RemoveContainer" containerID="1cb971fcc00cf431468aae3ec808d9c56c04576350576b1f528a7cf2de6a0059" Jan 30 17:30:13 crc kubenswrapper[4875]: I0130 17:30:13.793291 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" Jan 30 17:30:13 crc kubenswrapper[4875]: I0130 17:30:13.830495 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5210e64b-5ccb-46aa-9797-f42f13d13eab-config-data\") pod \"5210e64b-5ccb-46aa-9797-f42f13d13eab\" (UID: \"5210e64b-5ccb-46aa-9797-f42f13d13eab\") " Jan 30 17:30:13 crc kubenswrapper[4875]: I0130 17:30:13.830633 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfxwx\" (UniqueName: \"kubernetes.io/projected/5210e64b-5ccb-46aa-9797-f42f13d13eab-kube-api-access-rfxwx\") pod \"5210e64b-5ccb-46aa-9797-f42f13d13eab\" (UID: \"5210e64b-5ccb-46aa-9797-f42f13d13eab\") " Jan 30 17:30:13 crc kubenswrapper[4875]: I0130 17:30:13.830704 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5210e64b-5ccb-46aa-9797-f42f13d13eab-scripts\") pod \"5210e64b-5ccb-46aa-9797-f42f13d13eab\" (UID: \"5210e64b-5ccb-46aa-9797-f42f13d13eab\") " Jan 30 17:30:13 crc kubenswrapper[4875]: I0130 17:30:13.841478 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5210e64b-5ccb-46aa-9797-f42f13d13eab-kube-api-access-rfxwx" (OuterVolumeSpecName: "kube-api-access-rfxwx") pod "5210e64b-5ccb-46aa-9797-f42f13d13eab" (UID: "5210e64b-5ccb-46aa-9797-f42f13d13eab"). InnerVolumeSpecName "kube-api-access-rfxwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:30:13 crc kubenswrapper[4875]: I0130 17:30:13.841778 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5210e64b-5ccb-46aa-9797-f42f13d13eab-scripts" (OuterVolumeSpecName: "scripts") pod "5210e64b-5ccb-46aa-9797-f42f13d13eab" (UID: "5210e64b-5ccb-46aa-9797-f42f13d13eab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:30:13 crc kubenswrapper[4875]: I0130 17:30:13.865011 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5210e64b-5ccb-46aa-9797-f42f13d13eab-config-data" (OuterVolumeSpecName: "config-data") pod "5210e64b-5ccb-46aa-9797-f42f13d13eab" (UID: "5210e64b-5ccb-46aa-9797-f42f13d13eab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:30:13 crc kubenswrapper[4875]: I0130 17:30:13.933058 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5210e64b-5ccb-46aa-9797-f42f13d13eab-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:13 crc kubenswrapper[4875]: I0130 17:30:13.933101 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfxwx\" (UniqueName: \"kubernetes.io/projected/5210e64b-5ccb-46aa-9797-f42f13d13eab-kube-api-access-rfxwx\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:13 crc kubenswrapper[4875]: I0130 17:30:13.933116 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5210e64b-5ccb-46aa-9797-f42f13d13eab-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:14 crc kubenswrapper[4875]: I0130 17:30:14.437148 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" event={"ID":"5210e64b-5ccb-46aa-9797-f42f13d13eab","Type":"ContainerDied","Data":"1a9ad5c62a6bb7a2f51ae83647370974d95a1b9dde871041ed068b7daaf2cb1c"} Jan 30 17:30:14 crc kubenswrapper[4875]: I0130 17:30:14.437190 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a9ad5c62a6bb7a2f51ae83647370974d95a1b9dde871041ed068b7daaf2cb1c" Jan 30 17:30:14 crc kubenswrapper[4875]: I0130 17:30:14.437255 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c" Jan 30 17:30:14 crc kubenswrapper[4875]: I0130 17:30:14.441469 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" event={"ID":"4352d42d-6f43-4899-95e8-cd45c91c2a6e","Type":"ContainerStarted","Data":"05b9c97ca737bffb2545d9a93b1e016613c7d56eda5749303cecae85e50b42aa"} Jan 30 17:30:14 crc kubenswrapper[4875]: I0130 17:30:14.611331 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:30:14 crc kubenswrapper[4875]: I0130 17:30:14.611599 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="36db59a3-da9e-4ad8-a2f6-abf638ec7e91" containerName="nova-kuttl-api-log" containerID="cri-o://7a0eaa753994c54943c3e02c7101df072d4dc78aed6aa4a15362bfd5ab7f8196" gracePeriod=30 Jan 30 17:30:14 crc kubenswrapper[4875]: I0130 17:30:14.612014 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="36db59a3-da9e-4ad8-a2f6-abf638ec7e91" containerName="nova-kuttl-api-api" containerID="cri-o://babbc9f173a5184992a39fd50df7912d796d8b5cb9807e1f7d209c1d5a11083d" gracePeriod=30 Jan 30 17:30:14 crc kubenswrapper[4875]: I0130 17:30:14.634260 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:30:14 crc kubenswrapper[4875]: I0130 17:30:14.634473 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="61de0af0-81c4-4301-93e5-834b87113ae6" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://54a322846ffdc6834ae292967a2249fcff80a7b6592d4e69e6e194caa8cc68c5" gracePeriod=30 Jan 30 17:30:14 crc kubenswrapper[4875]: I0130 17:30:14.703795 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:30:14 crc 
kubenswrapper[4875]: I0130 17:30:14.704454 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="057a8d5b-be16-42b8-99fe-5ec8eee230ed" containerName="nova-kuttl-metadata-log" containerID="cri-o://43d4060f287b66a3a82f4a475cd54d01e9fcd295c2f1afb79b4baf9a869c9c31" gracePeriod=30 Jan 30 17:30:14 crc kubenswrapper[4875]: I0130 17:30:14.704830 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="057a8d5b-be16-42b8-99fe-5ec8eee230ed" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://6d27ae8cc2db33d75d5fbcd529842ce3cf41f43b0ec31b4bfdfdc02d8d56150d" gracePeriod=30 Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.451100 4875 generic.go:334] "Generic (PLEG): container finished" podID="220f50f1-8337-455d-b973-24e9d7b1917c" containerID="170ea35065a3a7a0019d371269ebeffdbd2f8bc3debdb53c930db1f18979556f" exitCode=0 Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.451169 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" event={"ID":"220f50f1-8337-455d-b973-24e9d7b1917c","Type":"ContainerDied","Data":"170ea35065a3a7a0019d371269ebeffdbd2f8bc3debdb53c930db1f18979556f"} Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.452754 4875 generic.go:334] "Generic (PLEG): container finished" podID="731abbcd-7cd7-49f8-baf9-ef35c4e00897" containerID="0c974deeedacc9a90126815645dae49aae930cc6b53e761e8c834673e7c4efd4" exitCode=0 Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.452799 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wv8ks" event={"ID":"731abbcd-7cd7-49f8-baf9-ef35c4e00897","Type":"ContainerDied","Data":"0c974deeedacc9a90126815645dae49aae930cc6b53e761e8c834673e7c4efd4"} Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.455787 4875 generic.go:334] "Generic (PLEG): container finished" podID="057a8d5b-be16-42b8-99fe-5ec8eee230ed" containerID="6d27ae8cc2db33d75d5fbcd529842ce3cf41f43b0ec31b4bfdfdc02d8d56150d" exitCode=0 Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.455812 4875 generic.go:334] "Generic (PLEG): container finished" podID="057a8d5b-be16-42b8-99fe-5ec8eee230ed" containerID="43d4060f287b66a3a82f4a475cd54d01e9fcd295c2f1afb79b4baf9a869c9c31" exitCode=143 Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.455844 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"057a8d5b-be16-42b8-99fe-5ec8eee230ed","Type":"ContainerDied","Data":"6d27ae8cc2db33d75d5fbcd529842ce3cf41f43b0ec31b4bfdfdc02d8d56150d"} Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.455866 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"057a8d5b-be16-42b8-99fe-5ec8eee230ed","Type":"ContainerDied","Data":"43d4060f287b66a3a82f4a475cd54d01e9fcd295c2f1afb79b4baf9a869c9c31"} Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.491497 4875 generic.go:334] "Generic (PLEG): container finished" podID="36db59a3-da9e-4ad8-a2f6-abf638ec7e91" containerID="babbc9f173a5184992a39fd50df7912d796d8b5cb9807e1f7d209c1d5a11083d" exitCode=0 Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.491526 4875 generic.go:334] "Generic (PLEG): container finished" podID="36db59a3-da9e-4ad8-a2f6-abf638ec7e91" containerID="7a0eaa753994c54943c3e02c7101df072d4dc78aed6aa4a15362bfd5ab7f8196" exitCode=143 Jan 30 
17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.491549 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"36db59a3-da9e-4ad8-a2f6-abf638ec7e91","Type":"ContainerDied","Data":"babbc9f173a5184992a39fd50df7912d796d8b5cb9807e1f7d209c1d5a11083d"} Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.491573 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"36db59a3-da9e-4ad8-a2f6-abf638ec7e91","Type":"ContainerDied","Data":"7a0eaa753994c54943c3e02c7101df072d4dc78aed6aa4a15362bfd5ab7f8196"} Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.569410 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.658276 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-config-data\") pod \"36db59a3-da9e-4ad8-a2f6-abf638ec7e91\" (UID: \"36db59a3-da9e-4ad8-a2f6-abf638ec7e91\") " Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.658372 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tkg5\" (UniqueName: \"kubernetes.io/projected/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-kube-api-access-7tkg5\") pod \"36db59a3-da9e-4ad8-a2f6-abf638ec7e91\" (UID: \"36db59a3-da9e-4ad8-a2f6-abf638ec7e91\") " Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.658522 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-logs\") pod \"36db59a3-da9e-4ad8-a2f6-abf638ec7e91\" (UID: \"36db59a3-da9e-4ad8-a2f6-abf638ec7e91\") " Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.659162 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-logs" (OuterVolumeSpecName: "logs") pod "36db59a3-da9e-4ad8-a2f6-abf638ec7e91" (UID: "36db59a3-da9e-4ad8-a2f6-abf638ec7e91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.664602 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-kube-api-access-7tkg5" (OuterVolumeSpecName: "kube-api-access-7tkg5") pod "36db59a3-da9e-4ad8-a2f6-abf638ec7e91" (UID: "36db59a3-da9e-4ad8-a2f6-abf638ec7e91"). InnerVolumeSpecName "kube-api-access-7tkg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.681029 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-config-data" (OuterVolumeSpecName: "config-data") pod "36db59a3-da9e-4ad8-a2f6-abf638ec7e91" (UID: "36db59a3-da9e-4ad8-a2f6-abf638ec7e91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.700101 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.760286 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057a8d5b-be16-42b8-99fe-5ec8eee230ed-config-data\") pod \"057a8d5b-be16-42b8-99fe-5ec8eee230ed\" (UID: \"057a8d5b-be16-42b8-99fe-5ec8eee230ed\") " Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.760354 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lgzl\" (UniqueName: \"kubernetes.io/projected/057a8d5b-be16-42b8-99fe-5ec8eee230ed-kube-api-access-8lgzl\") pod \"057a8d5b-be16-42b8-99fe-5ec8eee230ed\" (UID: \"057a8d5b-be16-42b8-99fe-5ec8eee230ed\") " Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.760394 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/057a8d5b-be16-42b8-99fe-5ec8eee230ed-logs\") pod \"057a8d5b-be16-42b8-99fe-5ec8eee230ed\" (UID: \"057a8d5b-be16-42b8-99fe-5ec8eee230ed\") " Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.760697 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tkg5\" (UniqueName: \"kubernetes.io/projected/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-kube-api-access-7tkg5\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.760709 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.760720 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36db59a3-da9e-4ad8-a2f6-abf638ec7e91-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.761426 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/057a8d5b-be16-42b8-99fe-5ec8eee230ed-logs" (OuterVolumeSpecName: "logs") pod "057a8d5b-be16-42b8-99fe-5ec8eee230ed" (UID: "057a8d5b-be16-42b8-99fe-5ec8eee230ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.763858 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/057a8d5b-be16-42b8-99fe-5ec8eee230ed-kube-api-access-8lgzl" (OuterVolumeSpecName: "kube-api-access-8lgzl") pod "057a8d5b-be16-42b8-99fe-5ec8eee230ed" (UID: "057a8d5b-be16-42b8-99fe-5ec8eee230ed"). InnerVolumeSpecName "kube-api-access-8lgzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.781753 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/057a8d5b-be16-42b8-99fe-5ec8eee230ed-config-data" (OuterVolumeSpecName: "config-data") pod "057a8d5b-be16-42b8-99fe-5ec8eee230ed" (UID: "057a8d5b-be16-42b8-99fe-5ec8eee230ed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.862032 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/057a8d5b-be16-42b8-99fe-5ec8eee230ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.862063 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lgzl\" (UniqueName: \"kubernetes.io/projected/057a8d5b-be16-42b8-99fe-5ec8eee230ed-kube-api-access-8lgzl\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:15 crc kubenswrapper[4875]: I0130 17:30:15.862074 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/057a8d5b-be16-42b8-99fe-5ec8eee230ed-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.227772 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dq4ms" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.227826 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dq4ms" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.298046 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dq4ms" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.504794 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"057a8d5b-be16-42b8-99fe-5ec8eee230ed","Type":"ContainerDied","Data":"0d96ac92f7f7964f3f703d2f8ae9d3f63cf8af4d538b4790120b6536ce650686"} Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.504829 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.504866 4875 scope.go:117] "RemoveContainer" containerID="6d27ae8cc2db33d75d5fbcd529842ce3cf41f43b0ec31b4bfdfdc02d8d56150d" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.508689 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.509398 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"36db59a3-da9e-4ad8-a2f6-abf638ec7e91","Type":"ContainerDied","Data":"416c46b056abfd0dd904170032df23ab33cb3b6bd89f35c4ea78099e209d3a30"} Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.580135 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dq4ms" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.624722 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.639656 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.650878 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.659358 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.677640 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:30:16 crc kubenswrapper[4875]: E0130 17:30:16.677980 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057a8d5b-be16-42b8-99fe-5ec8eee230ed" containerName="nova-kuttl-metadata-log" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.677996 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="057a8d5b-be16-42b8-99fe-5ec8eee230ed" containerName="nova-kuttl-metadata-log" Jan 30 17:30:16 crc kubenswrapper[4875]: E0130 17:30:16.678010 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36db59a3-da9e-4ad8-a2f6-abf638ec7e91" containerName="nova-kuttl-api-log" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.678016 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="36db59a3-da9e-4ad8-a2f6-abf638ec7e91" containerName="nova-kuttl-api-log" Jan 30 17:30:16 crc kubenswrapper[4875]: E0130 17:30:16.678029 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36db59a3-da9e-4ad8-a2f6-abf638ec7e91" containerName="nova-kuttl-api-api" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.678034 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="36db59a3-da9e-4ad8-a2f6-abf638ec7e91" containerName="nova-kuttl-api-api" Jan 30 17:30:16 crc kubenswrapper[4875]: E0130 17:30:16.678047 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057a8d5b-be16-42b8-99fe-5ec8eee230ed" containerName="nova-kuttl-metadata-metadata" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.678053 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="057a8d5b-be16-42b8-99fe-5ec8eee230ed" containerName="nova-kuttl-metadata-metadata" Jan 30 17:30:16 crc kubenswrapper[4875]: E0130 17:30:16.678065 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5210e64b-5ccb-46aa-9797-f42f13d13eab" containerName="nova-manage" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.678070 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="5210e64b-5ccb-46aa-9797-f42f13d13eab" containerName="nova-manage" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.678216 4875 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="36db59a3-da9e-4ad8-a2f6-abf638ec7e91" containerName="nova-kuttl-api-log" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.678231 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="36db59a3-da9e-4ad8-a2f6-abf638ec7e91" containerName="nova-kuttl-api-api" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.678241 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="5210e64b-5ccb-46aa-9797-f42f13d13eab" containerName="nova-manage" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.678252 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="057a8d5b-be16-42b8-99fe-5ec8eee230ed" containerName="nova-kuttl-metadata-log" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.678260 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="057a8d5b-be16-42b8-99fe-5ec8eee230ed" containerName="nova-kuttl-metadata-metadata" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.712786 4875 scope.go:117] "RemoveContainer" containerID="43d4060f287b66a3a82f4a475cd54d01e9fcd295c2f1afb79b4baf9a869c9c31" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.712819 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.714949 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.735434 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.738609 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.745324 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.746196 4875 scope.go:117] "RemoveContainer" containerID="babbc9f173a5184992a39fd50df7912d796d8b5cb9807e1f7d209c1d5a11083d" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.762488 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.770506 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.777258 4875 scope.go:117] "RemoveContainer" containerID="7a0eaa753994c54943c3e02c7101df072d4dc78aed6aa4a15362bfd5ab7f8196" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.815487 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grdfz\" (UniqueName: \"kubernetes.io/projected/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-kube-api-access-grdfz\") pod \"nova-kuttl-api-0\" (UID: \"ef11b4be-d976-4a6c-9ac9-3ff6a721178e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.815544 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d53b7c1-7005-4fa9-a572-014045a35eeb-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"8d53b7c1-7005-4fa9-a572-014045a35eeb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.815643 4875 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d53b7c1-7005-4fa9-a572-014045a35eeb-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"8d53b7c1-7005-4fa9-a572-014045a35eeb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.815727 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnfdd\" (UniqueName: \"kubernetes.io/projected/8d53b7c1-7005-4fa9-a572-014045a35eeb-kube-api-access-nnfdd\") pod \"nova-kuttl-metadata-0\" (UID: \"8d53b7c1-7005-4fa9-a572-014045a35eeb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.815744 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-logs\") pod \"nova-kuttl-api-0\" (UID: \"ef11b4be-d976-4a6c-9ac9-3ff6a721178e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.815783 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-config-data\") pod \"nova-kuttl-api-0\" (UID: \"ef11b4be-d976-4a6c-9ac9-3ff6a721178e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.880303 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.917383 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d53b7c1-7005-4fa9-a572-014045a35eeb-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"8d53b7c1-7005-4fa9-a572-014045a35eeb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.917855 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnfdd\" (UniqueName: \"kubernetes.io/projected/8d53b7c1-7005-4fa9-a572-014045a35eeb-kube-api-access-nnfdd\") pod \"nova-kuttl-metadata-0\" (UID: \"8d53b7c1-7005-4fa9-a572-014045a35eeb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.917886 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-logs\") pod \"nova-kuttl-api-0\" (UID: \"ef11b4be-d976-4a6c-9ac9-3ff6a721178e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.917928 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-config-data\") pod \"nova-kuttl-api-0\" (UID: \"ef11b4be-d976-4a6c-9ac9-3ff6a721178e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.918013 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grdfz\" (UniqueName: \"kubernetes.io/projected/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-kube-api-access-grdfz\") pod \"nova-kuttl-api-0\" (UID: \"ef11b4be-d976-4a6c-9ac9-3ff6a721178e\") " 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.918048 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d53b7c1-7005-4fa9-a572-014045a35eeb-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"8d53b7c1-7005-4fa9-a572-014045a35eeb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.918131 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d53b7c1-7005-4fa9-a572-014045a35eeb-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"8d53b7c1-7005-4fa9-a572-014045a35eeb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.918415 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-logs\") pod \"nova-kuttl-api-0\" (UID: \"ef11b4be-d976-4a6c-9ac9-3ff6a721178e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.925472 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d53b7c1-7005-4fa9-a572-014045a35eeb-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"8d53b7c1-7005-4fa9-a572-014045a35eeb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.926211 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-config-data\") pod \"nova-kuttl-api-0\" (UID: \"ef11b4be-d976-4a6c-9ac9-3ff6a721178e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.936851 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnfdd\" (UniqueName: \"kubernetes.io/projected/8d53b7c1-7005-4fa9-a572-014045a35eeb-kube-api-access-nnfdd\") pod \"nova-kuttl-metadata-0\" (UID: \"8d53b7c1-7005-4fa9-a572-014045a35eeb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:16 crc kubenswrapper[4875]: I0130 17:30:16.939198 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grdfz\" (UniqueName: \"kubernetes.io/projected/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-kube-api-access-grdfz\") pod \"nova-kuttl-api-0\" (UID: \"ef11b4be-d976-4a6c-9ac9-3ff6a721178e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.018784 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/220f50f1-8337-455d-b973-24e9d7b1917c-config-data\") pod \"220f50f1-8337-455d-b973-24e9d7b1917c\" (UID: \"220f50f1-8337-455d-b973-24e9d7b1917c\") " Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.018884 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/220f50f1-8337-455d-b973-24e9d7b1917c-scripts\") pod \"220f50f1-8337-455d-b973-24e9d7b1917c\" (UID: \"220f50f1-8337-455d-b973-24e9d7b1917c\") " Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.019003 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlhks\" (UniqueName: 
\"kubernetes.io/projected/220f50f1-8337-455d-b973-24e9d7b1917c-kube-api-access-wlhks\") pod \"220f50f1-8337-455d-b973-24e9d7b1917c\" (UID: \"220f50f1-8337-455d-b973-24e9d7b1917c\") " Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.023074 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/220f50f1-8337-455d-b973-24e9d7b1917c-scripts" (OuterVolumeSpecName: "scripts") pod "220f50f1-8337-455d-b973-24e9d7b1917c" (UID: "220f50f1-8337-455d-b973-24e9d7b1917c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.024278 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/220f50f1-8337-455d-b973-24e9d7b1917c-kube-api-access-wlhks" (OuterVolumeSpecName: "kube-api-access-wlhks") pod "220f50f1-8337-455d-b973-24e9d7b1917c" (UID: "220f50f1-8337-455d-b973-24e9d7b1917c"). InnerVolumeSpecName "kube-api-access-wlhks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.040955 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.049716 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/220f50f1-8337-455d-b973-24e9d7b1917c-config-data" (OuterVolumeSpecName: "config-data") pod "220f50f1-8337-455d-b973-24e9d7b1917c" (UID: "220f50f1-8337-455d-b973-24e9d7b1917c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.062504 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.120853 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/220f50f1-8337-455d-b973-24e9d7b1917c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.120892 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlhks\" (UniqueName: \"kubernetes.io/projected/220f50f1-8337-455d-b973-24e9d7b1917c-kube-api-access-wlhks\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.120911 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/220f50f1-8337-455d-b973-24e9d7b1917c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.494270 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:30:17 crc kubenswrapper[4875]: W0130 17:30:17.497644 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef11b4be_d976_4a6c_9ac9_3ff6a721178e.slice/crio-27d97bddd2d5b5e81180d0ae8ba3cf80512cd475e27121651a86c8a48cdf6f73 WatchSource:0}: Error finding container 27d97bddd2d5b5e81180d0ae8ba3cf80512cd475e27121651a86c8a48cdf6f73: Status 404 returned error can't find the container with id 27d97bddd2d5b5e81180d0ae8ba3cf80512cd475e27121651a86c8a48cdf6f73 Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.522184 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"ef11b4be-d976-4a6c-9ac9-3ff6a721178e","Type":"ContainerStarted","Data":"27d97bddd2d5b5e81180d0ae8ba3cf80512cd475e27121651a86c8a48cdf6f73"} Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.525016 4875 generic.go:334] "Generic (PLEG): container finished" podID="4352d42d-6f43-4899-95e8-cd45c91c2a6e" containerID="05b9c97ca737bffb2545d9a93b1e016613c7d56eda5749303cecae85e50b42aa" exitCode=0 Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.525078 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" event={"ID":"4352d42d-6f43-4899-95e8-cd45c91c2a6e","Type":"ContainerDied","Data":"05b9c97ca737bffb2545d9a93b1e016613c7d56eda5749303cecae85e50b42aa"} Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.525112 4875 scope.go:117] "RemoveContainer" containerID="1cb971fcc00cf431468aae3ec808d9c56c04576350576b1f528a7cf2de6a0059" Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.529233 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" event={"ID":"220f50f1-8337-455d-b973-24e9d7b1917c","Type":"ContainerDied","Data":"1764b4a5fe73bb174a66b55dee964f038e15b919af09c927a419a11c8e66a3d1"} Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.529267 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk" Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.529290 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1764b4a5fe73bb174a66b55dee964f038e15b919af09c927a419a11c8e66a3d1" Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.535861 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wv8ks" event={"ID":"731abbcd-7cd7-49f8-baf9-ef35c4e00897","Type":"ContainerStarted","Data":"2d44a270df262d050a0939750e9e8ac31ec321211f0d713240397d2ed2e20bea"} Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.575366 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.582512 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wv8ks" podStartSLOduration=3.449242119 podStartE2EDuration="7.582492265s" podCreationTimestamp="2026-01-30 17:30:10 +0000 UTC" firstStartedPulling="2026-01-30 17:30:12.422158352 +0000 UTC m=+2022.969521745" lastFinishedPulling="2026-01-30 17:30:16.555408508 +0000 UTC m=+2027.102771891" observedRunningTime="2026-01-30 17:30:17.57008873 +0000 UTC m=+2028.117452123" watchObservedRunningTime="2026-01-30 17:30:17.582492265 +0000 UTC m=+2028.129855648" Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.638707 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kcdwk" Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.638786 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kcdwk" Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.663900 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.704205 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kcdwk" Jan 30 17:30:17 crc kubenswrapper[4875]: I0130 17:30:17.713081 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:30:18 crc kubenswrapper[4875]: I0130 17:30:18.148940 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="057a8d5b-be16-42b8-99fe-5ec8eee230ed" path="/var/lib/kubelet/pods/057a8d5b-be16-42b8-99fe-5ec8eee230ed/volumes" Jan 30 17:30:18 crc kubenswrapper[4875]: I0130 17:30:18.149773 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36db59a3-da9e-4ad8-a2f6-abf638ec7e91" path="/var/lib/kubelet/pods/36db59a3-da9e-4ad8-a2f6-abf638ec7e91/volumes" Jan 30 17:30:18 crc kubenswrapper[4875]: I0130 17:30:18.604263 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kcdwk" Jan 30 17:30:18 crc kubenswrapper[4875]: W0130 17:30:18.946945 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d53b7c1_7005_4fa9_a572_014045a35eeb.slice/crio-e0bfcc084785c3004514e0fde54c2cd31ad9d5b97c1248e906092d5789487255 WatchSource:0}: Error finding container e0bfcc084785c3004514e0fde54c2cd31ad9d5b97c1248e906092d5789487255: Status 404 returned error can't find the container with id e0bfcc084785c3004514e0fde54c2cd31ad9d5b97c1248e906092d5789487255 Jan 30 
17:30:19 crc kubenswrapper[4875]: I0130 17:30:19.213855 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" Jan 30 17:30:19 crc kubenswrapper[4875]: I0130 17:30:19.355088 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4352d42d-6f43-4899-95e8-cd45c91c2a6e-config-data\") pod \"4352d42d-6f43-4899-95e8-cd45c91c2a6e\" (UID: \"4352d42d-6f43-4899-95e8-cd45c91c2a6e\") " Jan 30 17:30:19 crc kubenswrapper[4875]: I0130 17:30:19.355152 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp7s2\" (UniqueName: \"kubernetes.io/projected/4352d42d-6f43-4899-95e8-cd45c91c2a6e-kube-api-access-bp7s2\") pod \"4352d42d-6f43-4899-95e8-cd45c91c2a6e\" (UID: \"4352d42d-6f43-4899-95e8-cd45c91c2a6e\") " Jan 30 17:30:19 crc kubenswrapper[4875]: I0130 17:30:19.355179 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4352d42d-6f43-4899-95e8-cd45c91c2a6e-scripts\") pod \"4352d42d-6f43-4899-95e8-cd45c91c2a6e\" (UID: \"4352d42d-6f43-4899-95e8-cd45c91c2a6e\") " Jan 30 17:30:19 crc kubenswrapper[4875]: I0130 17:30:19.361625 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4352d42d-6f43-4899-95e8-cd45c91c2a6e-scripts" (OuterVolumeSpecName: "scripts") pod "4352d42d-6f43-4899-95e8-cd45c91c2a6e" (UID: "4352d42d-6f43-4899-95e8-cd45c91c2a6e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:30:19 crc kubenswrapper[4875]: I0130 17:30:19.366771 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4352d42d-6f43-4899-95e8-cd45c91c2a6e-kube-api-access-bp7s2" (OuterVolumeSpecName: "kube-api-access-bp7s2") pod "4352d42d-6f43-4899-95e8-cd45c91c2a6e" (UID: "4352d42d-6f43-4899-95e8-cd45c91c2a6e"). InnerVolumeSpecName "kube-api-access-bp7s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:30:19 crc kubenswrapper[4875]: I0130 17:30:19.382305 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4352d42d-6f43-4899-95e8-cd45c91c2a6e-config-data" (OuterVolumeSpecName: "config-data") pod "4352d42d-6f43-4899-95e8-cd45c91c2a6e" (UID: "4352d42d-6f43-4899-95e8-cd45c91c2a6e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:30:19 crc kubenswrapper[4875]: I0130 17:30:19.456883 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4352d42d-6f43-4899-95e8-cd45c91c2a6e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:19 crc kubenswrapper[4875]: I0130 17:30:19.456925 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp7s2\" (UniqueName: \"kubernetes.io/projected/4352d42d-6f43-4899-95e8-cd45c91c2a6e-kube-api-access-bp7s2\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:19 crc kubenswrapper[4875]: I0130 17:30:19.456940 4875 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4352d42d-6f43-4899-95e8-cd45c91c2a6e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:19 crc kubenswrapper[4875]: I0130 17:30:19.560092 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8d53b7c1-7005-4fa9-a572-014045a35eeb","Type":"ContainerStarted","Data":"e0bfcc084785c3004514e0fde54c2cd31ad9d5b97c1248e906092d5789487255"} Jan 30 17:30:19 crc kubenswrapper[4875]: I0130 17:30:19.562360 4875 generic.go:334] "Generic (PLEG): container finished" podID="61de0af0-81c4-4301-93e5-834b87113ae6" containerID="54a322846ffdc6834ae292967a2249fcff80a7b6592d4e69e6e194caa8cc68c5" exitCode=0 Jan 30 17:30:19 crc kubenswrapper[4875]: I0130 17:30:19.562407 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"61de0af0-81c4-4301-93e5-834b87113ae6","Type":"ContainerDied","Data":"54a322846ffdc6834ae292967a2249fcff80a7b6592d4e69e6e194caa8cc68c5"} Jan 30 17:30:19 crc kubenswrapper[4875]: I0130 17:30:19.571987 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" event={"ID":"4352d42d-6f43-4899-95e8-cd45c91c2a6e","Type":"ContainerDied","Data":"f5fe3b6f28b25300795baec894743411009b7ba0d0a7b60c2431502f8f198643"} Jan 30 17:30:19 crc kubenswrapper[4875]: I0130 17:30:19.572019 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq" Jan 30 17:30:19 crc kubenswrapper[4875]: I0130 17:30:19.572034 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5fe3b6f28b25300795baec894743411009b7ba0d0a7b60c2431502f8f198643" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.231495 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.387149 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwf92\" (UniqueName: \"kubernetes.io/projected/61de0af0-81c4-4301-93e5-834b87113ae6-kube-api-access-wwf92\") pod \"61de0af0-81c4-4301-93e5-834b87113ae6\" (UID: \"61de0af0-81c4-4301-93e5-834b87113ae6\") " Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.387372 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61de0af0-81c4-4301-93e5-834b87113ae6-config-data\") pod \"61de0af0-81c4-4301-93e5-834b87113ae6\" (UID: \"61de0af0-81c4-4301-93e5-834b87113ae6\") " Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.395858 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61de0af0-81c4-4301-93e5-834b87113ae6-kube-api-access-wwf92" (OuterVolumeSpecName: "kube-api-access-wwf92") pod "61de0af0-81c4-4301-93e5-834b87113ae6" (UID: "61de0af0-81c4-4301-93e5-834b87113ae6"). InnerVolumeSpecName "kube-api-access-wwf92". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.411128 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61de0af0-81c4-4301-93e5-834b87113ae6-config-data" (OuterVolumeSpecName: "config-data") pod "61de0af0-81c4-4301-93e5-834b87113ae6" (UID: "61de0af0-81c4-4301-93e5-834b87113ae6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.488937 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61de0af0-81c4-4301-93e5-834b87113ae6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.488976 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwf92\" (UniqueName: \"kubernetes.io/projected/61de0af0-81c4-4301-93e5-834b87113ae6-kube-api-access-wwf92\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.581292 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.581287 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"61de0af0-81c4-4301-93e5-834b87113ae6","Type":"ContainerDied","Data":"793d99064dbee442f3fd58cb6e0d48ac22ce951ccb537101103fe5bae5f65edd"} Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.581688 4875 scope.go:117] "RemoveContainer" containerID="54a322846ffdc6834ae292967a2249fcff80a7b6592d4e69e6e194caa8cc68c5" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.583459 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8d53b7c1-7005-4fa9-a572-014045a35eeb","Type":"ContainerStarted","Data":"884f5e8bf932c01b78b8c37f8c809b1b3ef4d29853d1d14255a043960ed8ea2f"} Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.583486 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8d53b7c1-7005-4fa9-a572-014045a35eeb","Type":"ContainerStarted","Data":"4a9b06c1920eb9f6b0afeaff492600caf230684eee65e16565995580e7c402b6"} Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.583522 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8d53b7c1-7005-4fa9-a572-014045a35eeb" containerName="nova-kuttl-metadata-log" containerID="cri-o://4a9b06c1920eb9f6b0afeaff492600caf230684eee65e16565995580e7c402b6" gracePeriod=30 Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.583566 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8d53b7c1-7005-4fa9-a572-014045a35eeb" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://884f5e8bf932c01b78b8c37f8c809b1b3ef4d29853d1d14255a043960ed8ea2f" gracePeriod=30 Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.588848 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="ef11b4be-d976-4a6c-9ac9-3ff6a721178e" containerName="nova-kuttl-api-log" containerID="cri-o://27f3c7efd09f3ca5d21fb30af5c3a004ef0d27754e4619fe5da781ec511cde48" gracePeriod=30 Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.588966 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="ef11b4be-d976-4a6c-9ac9-3ff6a721178e" containerName="nova-kuttl-api-api" containerID="cri-o://291d834e2f5775c8de5ae830ad68a0a4a7c86ffa808f77d0a333acd317b6f07c" gracePeriod=30 Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.589153 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"ef11b4be-d976-4a6c-9ac9-3ff6a721178e","Type":"ContainerStarted","Data":"291d834e2f5775c8de5ae830ad68a0a4a7c86ffa808f77d0a333acd317b6f07c"} Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.589187 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"ef11b4be-d976-4a6c-9ac9-3ff6a721178e","Type":"ContainerStarted","Data":"27f3c7efd09f3ca5d21fb30af5c3a004ef0d27754e4619fe5da781ec511cde48"} Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.609488 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=4.609471717 podStartE2EDuration="4.609471717s" 
podCreationTimestamp="2026-01-30 17:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:30:20.604831579 +0000 UTC m=+2031.152194972" watchObservedRunningTime="2026-01-30 17:30:20.609471717 +0000 UTC m=+2031.156835100" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.621520 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.626726 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.645274 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:30:20 crc kubenswrapper[4875]: E0130 17:30:20.645745 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4352d42d-6f43-4899-95e8-cd45c91c2a6e" containerName="nova-manage" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.645765 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="4352d42d-6f43-4899-95e8-cd45c91c2a6e" containerName="nova-manage" Jan 30 17:30:20 crc kubenswrapper[4875]: E0130 17:30:20.645780 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61de0af0-81c4-4301-93e5-834b87113ae6" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.645787 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="61de0af0-81c4-4301-93e5-834b87113ae6" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:30:20 crc kubenswrapper[4875]: E0130 17:30:20.645797 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4352d42d-6f43-4899-95e8-cd45c91c2a6e" containerName="nova-manage" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.645803 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="4352d42d-6f43-4899-95e8-cd45c91c2a6e" containerName="nova-manage" Jan 30 17:30:20 crc kubenswrapper[4875]: E0130 17:30:20.645824 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="220f50f1-8337-455d-b973-24e9d7b1917c" containerName="nova-manage" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.645830 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="220f50f1-8337-455d-b973-24e9d7b1917c" containerName="nova-manage" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.645998 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="61de0af0-81c4-4301-93e5-834b87113ae6" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.646025 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="4352d42d-6f43-4899-95e8-cd45c91c2a6e" containerName="nova-manage" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.646039 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="220f50f1-8337-455d-b973-24e9d7b1917c" containerName="nova-manage" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.646048 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="4352d42d-6f43-4899-95e8-cd45c91c2a6e" containerName="nova-manage" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.646775 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.647281 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=4.647267682 podStartE2EDuration="4.647267682s" podCreationTimestamp="2026-01-30 17:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:30:20.639513145 +0000 UTC m=+2031.186876538" watchObservedRunningTime="2026-01-30 17:30:20.647267682 +0000 UTC m=+2031.194631065" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.649021 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.660475 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.793846 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28464b14-b0ae-497b-b209-4c5ee5b67b5a-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"28464b14-b0ae-497b-b209-4c5ee5b67b5a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.793950 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drvm9\" (UniqueName: \"kubernetes.io/projected/28464b14-b0ae-497b-b209-4c5ee5b67b5a-kube-api-access-drvm9\") pod \"nova-kuttl-scheduler-0\" (UID: \"28464b14-b0ae-497b-b209-4c5ee5b67b5a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.895177 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drvm9\" (UniqueName: \"kubernetes.io/projected/28464b14-b0ae-497b-b209-4c5ee5b67b5a-kube-api-access-drvm9\") pod \"nova-kuttl-scheduler-0\" (UID: \"28464b14-b0ae-497b-b209-4c5ee5b67b5a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.895481 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28464b14-b0ae-497b-b209-4c5ee5b67b5a-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"28464b14-b0ae-497b-b209-4c5ee5b67b5a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.899898 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28464b14-b0ae-497b-b209-4c5ee5b67b5a-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"28464b14-b0ae-497b-b209-4c5ee5b67b5a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.912491 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drvm9\" (UniqueName: \"kubernetes.io/projected/28464b14-b0ae-497b-b209-4c5ee5b67b5a-kube-api-access-drvm9\") pod \"nova-kuttl-scheduler-0\" (UID: \"28464b14-b0ae-497b-b209-4c5ee5b67b5a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:20 crc kubenswrapper[4875]: I0130 17:30:20.971161 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:21 crc kubenswrapper[4875]: I0130 17:30:21.229324 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wv8ks" Jan 30 17:30:21 crc kubenswrapper[4875]: I0130 17:30:21.229934 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wv8ks" Jan 30 17:30:21 crc kubenswrapper[4875]: I0130 17:30:21.288569 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcdwk"] Jan 30 17:30:21 crc kubenswrapper[4875]: I0130 17:30:21.410524 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:30:21 crc kubenswrapper[4875]: W0130 17:30:21.413479 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28464b14_b0ae_497b_b209_4c5ee5b67b5a.slice/crio-af2eba5ba57eecd419df1ce4d404ca30bc70793ef2e470c047beb671a65b2bcf WatchSource:0}: Error finding container af2eba5ba57eecd419df1ce4d404ca30bc70793ef2e470c047beb671a65b2bcf: Status 404 returned error can't find the container with id af2eba5ba57eecd419df1ce4d404ca30bc70793ef2e470c047beb671a65b2bcf Jan 30 17:30:21 crc kubenswrapper[4875]: I0130 17:30:21.596624 4875 generic.go:334] "Generic (PLEG): container finished" podID="ef11b4be-d976-4a6c-9ac9-3ff6a721178e" containerID="291d834e2f5775c8de5ae830ad68a0a4a7c86ffa808f77d0a333acd317b6f07c" exitCode=0 Jan 30 17:30:21 crc kubenswrapper[4875]: I0130 17:30:21.596657 4875 generic.go:334] "Generic (PLEG): container finished" podID="ef11b4be-d976-4a6c-9ac9-3ff6a721178e" containerID="27f3c7efd09f3ca5d21fb30af5c3a004ef0d27754e4619fe5da781ec511cde48" exitCode=143 Jan 30 17:30:21 crc kubenswrapper[4875]: I0130 17:30:21.596696 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"ef11b4be-d976-4a6c-9ac9-3ff6a721178e","Type":"ContainerDied","Data":"291d834e2f5775c8de5ae830ad68a0a4a7c86ffa808f77d0a333acd317b6f07c"} Jan 30 17:30:21 crc kubenswrapper[4875]: I0130 17:30:21.596720 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"ef11b4be-d976-4a6c-9ac9-3ff6a721178e","Type":"ContainerDied","Data":"27f3c7efd09f3ca5d21fb30af5c3a004ef0d27754e4619fe5da781ec511cde48"} Jan 30 17:30:21 crc kubenswrapper[4875]: I0130 17:30:21.598890 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"28464b14-b0ae-497b-b209-4c5ee5b67b5a","Type":"ContainerStarted","Data":"af2eba5ba57eecd419df1ce4d404ca30bc70793ef2e470c047beb671a65b2bcf"} Jan 30 17:30:21 crc kubenswrapper[4875]: I0130 17:30:21.600572 4875 generic.go:334] "Generic (PLEG): container finished" podID="8d53b7c1-7005-4fa9-a572-014045a35eeb" containerID="884f5e8bf932c01b78b8c37f8c809b1b3ef4d29853d1d14255a043960ed8ea2f" exitCode=0 Jan 30 17:30:21 crc kubenswrapper[4875]: I0130 17:30:21.600610 4875 generic.go:334] "Generic (PLEG): container finished" podID="8d53b7c1-7005-4fa9-a572-014045a35eeb" containerID="4a9b06c1920eb9f6b0afeaff492600caf230684eee65e16565995580e7c402b6" exitCode=143 Jan 30 17:30:21 crc kubenswrapper[4875]: I0130 17:30:21.600807 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kcdwk" podUID="867bde8d-d540-459b-a0ee-90ee2eb735ef" 
containerName="registry-server" containerID="cri-o://8e73d21d38adc4ad54cac8ecf7cbfd6282a3ffc5d418b6271fab5bf5f94cc18e" gracePeriod=2 Jan 30 17:30:21 crc kubenswrapper[4875]: I0130 17:30:21.601225 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8d53b7c1-7005-4fa9-a572-014045a35eeb","Type":"ContainerDied","Data":"884f5e8bf932c01b78b8c37f8c809b1b3ef4d29853d1d14255a043960ed8ea2f"} Jan 30 17:30:21 crc kubenswrapper[4875]: I0130 17:30:21.601294 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8d53b7c1-7005-4fa9-a572-014045a35eeb","Type":"ContainerDied","Data":"4a9b06c1920eb9f6b0afeaff492600caf230684eee65e16565995580e7c402b6"} Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.063275 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.063640 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.145963 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61de0af0-81c4-4301-93e5-834b87113ae6" path="/var/lib/kubelet/pods/61de0af0-81c4-4301-93e5-834b87113ae6/volumes" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.271361 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wv8ks" podUID="731abbcd-7cd7-49f8-baf9-ef35c4e00897" containerName="registry-server" probeResult="failure" output=< Jan 30 17:30:22 crc kubenswrapper[4875]: timeout: failed to connect service ":50051" within 1s Jan 30 17:30:22 crc kubenswrapper[4875]: > Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.612931 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"28464b14-b0ae-497b-b209-4c5ee5b67b5a","Type":"ContainerStarted","Data":"43e4c62e8c024c87de87eccd7fd974c1ef89f1f23d76ee40886efbbb8c3b00ba"} Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.614663 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8d53b7c1-7005-4fa9-a572-014045a35eeb","Type":"ContainerDied","Data":"e0bfcc084785c3004514e0fde54c2cd31ad9d5b97c1248e906092d5789487255"} Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.614706 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0bfcc084785c3004514e0fde54c2cd31ad9d5b97c1248e906092d5789487255" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.619381 4875 generic.go:334] "Generic (PLEG): container finished" podID="867bde8d-d540-459b-a0ee-90ee2eb735ef" containerID="8e73d21d38adc4ad54cac8ecf7cbfd6282a3ffc5d418b6271fab5bf5f94cc18e" exitCode=0 Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.619419 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcdwk" event={"ID":"867bde8d-d540-459b-a0ee-90ee2eb735ef","Type":"ContainerDied","Data":"8e73d21d38adc4ad54cac8ecf7cbfd6282a3ffc5d418b6271fab5bf5f94cc18e"} Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.633288 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.633273553 podStartE2EDuration="2.633273553s" podCreationTimestamp="2026-01-30 17:30:20 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:30:22.62846256 +0000 UTC m=+2033.175825943" watchObservedRunningTime="2026-01-30 17:30:22.633273553 +0000 UTC m=+2033.180636936" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.700164 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.711690 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.823720 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grdfz\" (UniqueName: \"kubernetes.io/projected/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-kube-api-access-grdfz\") pod \"ef11b4be-d976-4a6c-9ac9-3ff6a721178e\" (UID: \"ef11b4be-d976-4a6c-9ac9-3ff6a721178e\") " Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.824157 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d53b7c1-7005-4fa9-a572-014045a35eeb-config-data\") pod \"8d53b7c1-7005-4fa9-a572-014045a35eeb\" (UID: \"8d53b7c1-7005-4fa9-a572-014045a35eeb\") " Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.824193 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d53b7c1-7005-4fa9-a572-014045a35eeb-logs\") pod \"8d53b7c1-7005-4fa9-a572-014045a35eeb\" (UID: \"8d53b7c1-7005-4fa9-a572-014045a35eeb\") " Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.824239 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-config-data\") pod \"ef11b4be-d976-4a6c-9ac9-3ff6a721178e\" (UID: \"ef11b4be-d976-4a6c-9ac9-3ff6a721178e\") " Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.824326 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnfdd\" (UniqueName: \"kubernetes.io/projected/8d53b7c1-7005-4fa9-a572-014045a35eeb-kube-api-access-nnfdd\") pod \"8d53b7c1-7005-4fa9-a572-014045a35eeb\" (UID: \"8d53b7c1-7005-4fa9-a572-014045a35eeb\") " Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.824389 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-logs\") pod \"ef11b4be-d976-4a6c-9ac9-3ff6a721178e\" (UID: \"ef11b4be-d976-4a6c-9ac9-3ff6a721178e\") " Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.824793 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d53b7c1-7005-4fa9-a572-014045a35eeb-logs" (OuterVolumeSpecName: "logs") pod "8d53b7c1-7005-4fa9-a572-014045a35eeb" (UID: "8d53b7c1-7005-4fa9-a572-014045a35eeb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.825070 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-logs" (OuterVolumeSpecName: "logs") pod "ef11b4be-d976-4a6c-9ac9-3ff6a721178e" (UID: "ef11b4be-d976-4a6c-9ac9-3ff6a721178e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.830099 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d53b7c1-7005-4fa9-a572-014045a35eeb-kube-api-access-nnfdd" (OuterVolumeSpecName: "kube-api-access-nnfdd") pod "8d53b7c1-7005-4fa9-a572-014045a35eeb" (UID: "8d53b7c1-7005-4fa9-a572-014045a35eeb"). InnerVolumeSpecName "kube-api-access-nnfdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.831145 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-kube-api-access-grdfz" (OuterVolumeSpecName: "kube-api-access-grdfz") pod "ef11b4be-d976-4a6c-9ac9-3ff6a721178e" (UID: "ef11b4be-d976-4a6c-9ac9-3ff6a721178e"). InnerVolumeSpecName "kube-api-access-grdfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.846330 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-config-data" (OuterVolumeSpecName: "config-data") pod "ef11b4be-d976-4a6c-9ac9-3ff6a721178e" (UID: "ef11b4be-d976-4a6c-9ac9-3ff6a721178e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.853230 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d53b7c1-7005-4fa9-a572-014045a35eeb-config-data" (OuterVolumeSpecName: "config-data") pod "8d53b7c1-7005-4fa9-a572-014045a35eeb" (UID: "8d53b7c1-7005-4fa9-a572-014045a35eeb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.871448 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kcdwk" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.926057 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x77q9\" (UniqueName: \"kubernetes.io/projected/867bde8d-d540-459b-a0ee-90ee2eb735ef-kube-api-access-x77q9\") pod \"867bde8d-d540-459b-a0ee-90ee2eb735ef\" (UID: \"867bde8d-d540-459b-a0ee-90ee2eb735ef\") " Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.926177 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867bde8d-d540-459b-a0ee-90ee2eb735ef-catalog-content\") pod \"867bde8d-d540-459b-a0ee-90ee2eb735ef\" (UID: \"867bde8d-d540-459b-a0ee-90ee2eb735ef\") " Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.926273 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867bde8d-d540-459b-a0ee-90ee2eb735ef-utilities\") pod \"867bde8d-d540-459b-a0ee-90ee2eb735ef\" (UID: \"867bde8d-d540-459b-a0ee-90ee2eb735ef\") " Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.926557 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d53b7c1-7005-4fa9-a572-014045a35eeb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.926578 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d53b7c1-7005-4fa9-a572-014045a35eeb-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.926613 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.926626 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnfdd\" (UniqueName: \"kubernetes.io/projected/8d53b7c1-7005-4fa9-a572-014045a35eeb-kube-api-access-nnfdd\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.926645 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.926657 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grdfz\" (UniqueName: \"kubernetes.io/projected/ef11b4be-d976-4a6c-9ac9-3ff6a721178e-kube-api-access-grdfz\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.927268 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/867bde8d-d540-459b-a0ee-90ee2eb735ef-utilities" (OuterVolumeSpecName: "utilities") pod "867bde8d-d540-459b-a0ee-90ee2eb735ef" (UID: "867bde8d-d540-459b-a0ee-90ee2eb735ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.929486 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/867bde8d-d540-459b-a0ee-90ee2eb735ef-kube-api-access-x77q9" (OuterVolumeSpecName: "kube-api-access-x77q9") pod "867bde8d-d540-459b-a0ee-90ee2eb735ef" (UID: "867bde8d-d540-459b-a0ee-90ee2eb735ef"). 
InnerVolumeSpecName "kube-api-access-x77q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:30:22 crc kubenswrapper[4875]: I0130 17:30:22.965430 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/867bde8d-d540-459b-a0ee-90ee2eb735ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "867bde8d-d540-459b-a0ee-90ee2eb735ef" (UID: "867bde8d-d540-459b-a0ee-90ee2eb735ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.029174 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x77q9\" (UniqueName: \"kubernetes.io/projected/867bde8d-d540-459b-a0ee-90ee2eb735ef-kube-api-access-x77q9\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.029210 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/867bde8d-d540-459b-a0ee-90ee2eb735ef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.029222 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/867bde8d-d540-459b-a0ee-90ee2eb735ef-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.629488 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"ef11b4be-d976-4a6c-9ac9-3ff6a721178e","Type":"ContainerDied","Data":"27d97bddd2d5b5e81180d0ae8ba3cf80512cd475e27121651a86c8a48cdf6f73"} Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.629572 4875 scope.go:117] "RemoveContainer" containerID="291d834e2f5775c8de5ae830ad68a0a4a7c86ffa808f77d0a333acd317b6f07c" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.629530 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.633216 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.633247 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kcdwk" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.644704 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kcdwk" event={"ID":"867bde8d-d540-459b-a0ee-90ee2eb735ef","Type":"ContainerDied","Data":"0f09e2dc8f6b9ee9bb23bb7c2c527972d47a1dc330f85c78025cba6e3a02ce6b"} Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.660332 4875 scope.go:117] "RemoveContainer" containerID="27f3c7efd09f3ca5d21fb30af5c3a004ef0d27754e4619fe5da781ec511cde48" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.698765 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.713406 4875 scope.go:117] "RemoveContainer" containerID="8e73d21d38adc4ad54cac8ecf7cbfd6282a3ffc5d418b6271fab5bf5f94cc18e" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.723388 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.723436 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:30:23 crc kubenswrapper[4875]: E0130 17:30:23.723740 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="867bde8d-d540-459b-a0ee-90ee2eb735ef" containerName="extract-utilities" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.723752 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="867bde8d-d540-459b-a0ee-90ee2eb735ef" containerName="extract-utilities" Jan 30 17:30:23 crc kubenswrapper[4875]: E0130 17:30:23.723765 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="867bde8d-d540-459b-a0ee-90ee2eb735ef" containerName="registry-server" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.723771 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="867bde8d-d540-459b-a0ee-90ee2eb735ef" containerName="registry-server" Jan 30 17:30:23 crc kubenswrapper[4875]: E0130 17:30:23.723778 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d53b7c1-7005-4fa9-a572-014045a35eeb" containerName="nova-kuttl-metadata-log" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.723783 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d53b7c1-7005-4fa9-a572-014045a35eeb" containerName="nova-kuttl-metadata-log" Jan 30 17:30:23 crc kubenswrapper[4875]: E0130 17:30:23.723799 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef11b4be-d976-4a6c-9ac9-3ff6a721178e" containerName="nova-kuttl-api-api" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.723805 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef11b4be-d976-4a6c-9ac9-3ff6a721178e" containerName="nova-kuttl-api-api" Jan 30 17:30:23 crc kubenswrapper[4875]: E0130 17:30:23.723816 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d53b7c1-7005-4fa9-a572-014045a35eeb" containerName="nova-kuttl-metadata-metadata" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.723822 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d53b7c1-7005-4fa9-a572-014045a35eeb" containerName="nova-kuttl-metadata-metadata" Jan 30 17:30:23 crc kubenswrapper[4875]: E0130 17:30:23.723830 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="867bde8d-d540-459b-a0ee-90ee2eb735ef" containerName="extract-content" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.723835 4875 
state_mem.go:107] "Deleted CPUSet assignment" podUID="867bde8d-d540-459b-a0ee-90ee2eb735ef" containerName="extract-content" Jan 30 17:30:23 crc kubenswrapper[4875]: E0130 17:30:23.723846 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef11b4be-d976-4a6c-9ac9-3ff6a721178e" containerName="nova-kuttl-api-log" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.723851 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef11b4be-d976-4a6c-9ac9-3ff6a721178e" containerName="nova-kuttl-api-log" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.723984 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef11b4be-d976-4a6c-9ac9-3ff6a721178e" containerName="nova-kuttl-api-log" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.724001 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d53b7c1-7005-4fa9-a572-014045a35eeb" containerName="nova-kuttl-metadata-log" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.724009 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d53b7c1-7005-4fa9-a572-014045a35eeb" containerName="nova-kuttl-metadata-metadata" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.724016 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef11b4be-d976-4a6c-9ac9-3ff6a721178e" containerName="nova-kuttl-api-api" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.724027 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="867bde8d-d540-459b-a0ee-90ee2eb735ef" containerName="registry-server" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.724928 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.728867 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.745242 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.759734 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.759818 4875 scope.go:117] "RemoveContainer" containerID="e112f052d50fbb1173afad1f4522f79892163468669c5b5a32c12c00f43cd584" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.772813 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.780379 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcdwk"] Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.788259 4875 scope.go:117] "RemoveContainer" containerID="2f168edfe8f497869c8ece3794c204f8ca2b5d436ea2a53f3abeea43a6e38ab5" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.794644 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kcdwk"] Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.797538 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.798784 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.800682 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.803772 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.842871 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52kxb\" (UniqueName: \"kubernetes.io/projected/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-kube-api-access-52kxb\") pod \"nova-kuttl-api-0\" (UID: \"12ac901b-5928-4e55-9e2b-71f0bfaf70e7\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.842950 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-config-data\") pod \"nova-kuttl-api-0\" (UID: \"12ac901b-5928-4e55-9e2b-71f0bfaf70e7\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.842978 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-logs\") pod \"nova-kuttl-api-0\" (UID: \"12ac901b-5928-4e55-9e2b-71f0bfaf70e7\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.843003 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b4178bb-44e0-4346-a26a-de1835e64c11-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"5b4178bb-44e0-4346-a26a-de1835e64c11\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.843132 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr8kx\" (UniqueName: \"kubernetes.io/projected/5b4178bb-44e0-4346-a26a-de1835e64c11-kube-api-access-hr8kx\") pod \"nova-kuttl-metadata-0\" (UID: \"5b4178bb-44e0-4346-a26a-de1835e64c11\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.843188 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b4178bb-44e0-4346-a26a-de1835e64c11-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"5b4178bb-44e0-4346-a26a-de1835e64c11\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.884300 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dq4ms"] Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.884547 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dq4ms" podUID="f0edba1d-9578-4bed-abfa-c6625e8f942a" containerName="registry-server" containerID="cri-o://3cfa334be804f7d87d4b514656e2141531ffb7dc0bf3d216dbff6d472dd112ab" gracePeriod=2 Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.944843 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52kxb\" (UniqueName: 
\"kubernetes.io/projected/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-kube-api-access-52kxb\") pod \"nova-kuttl-api-0\" (UID: \"12ac901b-5928-4e55-9e2b-71f0bfaf70e7\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.944908 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-config-data\") pod \"nova-kuttl-api-0\" (UID: \"12ac901b-5928-4e55-9e2b-71f0bfaf70e7\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.944952 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-logs\") pod \"nova-kuttl-api-0\" (UID: \"12ac901b-5928-4e55-9e2b-71f0bfaf70e7\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.944992 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b4178bb-44e0-4346-a26a-de1835e64c11-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"5b4178bb-44e0-4346-a26a-de1835e64c11\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.945061 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr8kx\" (UniqueName: \"kubernetes.io/projected/5b4178bb-44e0-4346-a26a-de1835e64c11-kube-api-access-hr8kx\") pod \"nova-kuttl-metadata-0\" (UID: \"5b4178bb-44e0-4346-a26a-de1835e64c11\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.945102 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b4178bb-44e0-4346-a26a-de1835e64c11-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"5b4178bb-44e0-4346-a26a-de1835e64c11\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.945535 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b4178bb-44e0-4346-a26a-de1835e64c11-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"5b4178bb-44e0-4346-a26a-de1835e64c11\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.945540 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-logs\") pod \"nova-kuttl-api-0\" (UID: \"12ac901b-5928-4e55-9e2b-71f0bfaf70e7\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.950416 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b4178bb-44e0-4346-a26a-de1835e64c11-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"5b4178bb-44e0-4346-a26a-de1835e64c11\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.950546 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-config-data\") pod \"nova-kuttl-api-0\" (UID: \"12ac901b-5928-4e55-9e2b-71f0bfaf70e7\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.964340 4875 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr8kx\" (UniqueName: \"kubernetes.io/projected/5b4178bb-44e0-4346-a26a-de1835e64c11-kube-api-access-hr8kx\") pod \"nova-kuttl-metadata-0\" (UID: \"5b4178bb-44e0-4346-a26a-de1835e64c11\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:23 crc kubenswrapper[4875]: I0130 17:30:23.964417 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52kxb\" (UniqueName: \"kubernetes.io/projected/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-kube-api-access-52kxb\") pod \"nova-kuttl-api-0\" (UID: \"12ac901b-5928-4e55-9e2b-71f0bfaf70e7\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.043297 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.153793 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="867bde8d-d540-459b-a0ee-90ee2eb735ef" path="/var/lib/kubelet/pods/867bde8d-d540-459b-a0ee-90ee2eb735ef/volumes" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.154650 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d53b7c1-7005-4fa9-a572-014045a35eeb" path="/var/lib/kubelet/pods/8d53b7c1-7005-4fa9-a572-014045a35eeb/volumes" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.155643 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef11b4be-d976-4a6c-9ac9-3ff6a721178e" path="/var/lib/kubelet/pods/ef11b4be-d976-4a6c-9ac9-3ff6a721178e/volumes" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.157653 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.296852 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dq4ms" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.362264 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk52t\" (UniqueName: \"kubernetes.io/projected/f0edba1d-9578-4bed-abfa-c6625e8f942a-kube-api-access-mk52t\") pod \"f0edba1d-9578-4bed-abfa-c6625e8f942a\" (UID: \"f0edba1d-9578-4bed-abfa-c6625e8f942a\") " Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.362357 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0edba1d-9578-4bed-abfa-c6625e8f942a-utilities\") pod \"f0edba1d-9578-4bed-abfa-c6625e8f942a\" (UID: \"f0edba1d-9578-4bed-abfa-c6625e8f942a\") " Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.362466 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0edba1d-9578-4bed-abfa-c6625e8f942a-catalog-content\") pod \"f0edba1d-9578-4bed-abfa-c6625e8f942a\" (UID: \"f0edba1d-9578-4bed-abfa-c6625e8f942a\") " Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.363887 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0edba1d-9578-4bed-abfa-c6625e8f942a-utilities" (OuterVolumeSpecName: "utilities") pod "f0edba1d-9578-4bed-abfa-c6625e8f942a" (UID: "f0edba1d-9578-4bed-abfa-c6625e8f942a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.367742 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0edba1d-9578-4bed-abfa-c6625e8f942a-kube-api-access-mk52t" (OuterVolumeSpecName: "kube-api-access-mk52t") pod "f0edba1d-9578-4bed-abfa-c6625e8f942a" (UID: "f0edba1d-9578-4bed-abfa-c6625e8f942a"). InnerVolumeSpecName "kube-api-access-mk52t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.432866 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0edba1d-9578-4bed-abfa-c6625e8f942a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0edba1d-9578-4bed-abfa-c6625e8f942a" (UID: "f0edba1d-9578-4bed-abfa-c6625e8f942a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.463969 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk52t\" (UniqueName: \"kubernetes.io/projected/f0edba1d-9578-4bed-abfa-c6625e8f942a-kube-api-access-mk52t\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.464028 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0edba1d-9578-4bed-abfa-c6625e8f942a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.464041 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0edba1d-9578-4bed-abfa-c6625e8f942a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.568170 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.644079 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"12ac901b-5928-4e55-9e2b-71f0bfaf70e7","Type":"ContainerStarted","Data":"857120e9c0dd0b3320bfd99210799269ffad420123f1dfd998f6ee6b952d27c4"} Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.646566 4875 generic.go:334] "Generic (PLEG): container finished" podID="f0edba1d-9578-4bed-abfa-c6625e8f942a" containerID="3cfa334be804f7d87d4b514656e2141531ffb7dc0bf3d216dbff6d472dd112ab" exitCode=0 Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.646616 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq4ms" event={"ID":"f0edba1d-9578-4bed-abfa-c6625e8f942a","Type":"ContainerDied","Data":"3cfa334be804f7d87d4b514656e2141531ffb7dc0bf3d216dbff6d472dd112ab"} Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.646666 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dq4ms" event={"ID":"f0edba1d-9578-4bed-abfa-c6625e8f942a","Type":"ContainerDied","Data":"a8bb062394b1dba8bcab80edb0ca7de61155a4e65a12f8aeec81befab862e9ff"} Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.646650 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dq4ms" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.646687 4875 scope.go:117] "RemoveContainer" containerID="3cfa334be804f7d87d4b514656e2141531ffb7dc0bf3d216dbff6d472dd112ab" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.660417 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.675547 4875 scope.go:117] "RemoveContainer" containerID="6db66f51136ea6fbc9298a3af669583a96ecbf74946867e23baea2c857d067c6" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.694830 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dq4ms"] Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.698917 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dq4ms"] Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.701711 4875 scope.go:117] "RemoveContainer" containerID="0b56ecf0f24c2095d9999b4495359d97ea676f34f6fe73f2ed7705f1c591733c" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.731671 4875 scope.go:117] "RemoveContainer" containerID="3cfa334be804f7d87d4b514656e2141531ffb7dc0bf3d216dbff6d472dd112ab" Jan 30 17:30:24 crc kubenswrapper[4875]: E0130 17:30:24.732035 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cfa334be804f7d87d4b514656e2141531ffb7dc0bf3d216dbff6d472dd112ab\": container with ID starting with 3cfa334be804f7d87d4b514656e2141531ffb7dc0bf3d216dbff6d472dd112ab not found: ID does not exist" containerID="3cfa334be804f7d87d4b514656e2141531ffb7dc0bf3d216dbff6d472dd112ab" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.732083 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cfa334be804f7d87d4b514656e2141531ffb7dc0bf3d216dbff6d472dd112ab"} err="failed to get container status \"3cfa334be804f7d87d4b514656e2141531ffb7dc0bf3d216dbff6d472dd112ab\": rpc error: code = NotFound desc = could not find container \"3cfa334be804f7d87d4b514656e2141531ffb7dc0bf3d216dbff6d472dd112ab\": container with ID starting with 3cfa334be804f7d87d4b514656e2141531ffb7dc0bf3d216dbff6d472dd112ab not found: ID does not exist" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.732114 4875 scope.go:117] "RemoveContainer" containerID="6db66f51136ea6fbc9298a3af669583a96ecbf74946867e23baea2c857d067c6" Jan 30 17:30:24 crc kubenswrapper[4875]: E0130 17:30:24.732551 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6db66f51136ea6fbc9298a3af669583a96ecbf74946867e23baea2c857d067c6\": container with ID starting with 6db66f51136ea6fbc9298a3af669583a96ecbf74946867e23baea2c857d067c6 not found: ID does not exist" containerID="6db66f51136ea6fbc9298a3af669583a96ecbf74946867e23baea2c857d067c6" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.732576 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6db66f51136ea6fbc9298a3af669583a96ecbf74946867e23baea2c857d067c6"} err="failed to get container status \"6db66f51136ea6fbc9298a3af669583a96ecbf74946867e23baea2c857d067c6\": rpc error: code = NotFound desc = could not find container \"6db66f51136ea6fbc9298a3af669583a96ecbf74946867e23baea2c857d067c6\": container with ID starting with 
6db66f51136ea6fbc9298a3af669583a96ecbf74946867e23baea2c857d067c6 not found: ID does not exist" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.732616 4875 scope.go:117] "RemoveContainer" containerID="0b56ecf0f24c2095d9999b4495359d97ea676f34f6fe73f2ed7705f1c591733c" Jan 30 17:30:24 crc kubenswrapper[4875]: E0130 17:30:24.732992 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b56ecf0f24c2095d9999b4495359d97ea676f34f6fe73f2ed7705f1c591733c\": container with ID starting with 0b56ecf0f24c2095d9999b4495359d97ea676f34f6fe73f2ed7705f1c591733c not found: ID does not exist" containerID="0b56ecf0f24c2095d9999b4495359d97ea676f34f6fe73f2ed7705f1c591733c" Jan 30 17:30:24 crc kubenswrapper[4875]: I0130 17:30:24.733029 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b56ecf0f24c2095d9999b4495359d97ea676f34f6fe73f2ed7705f1c591733c"} err="failed to get container status \"0b56ecf0f24c2095d9999b4495359d97ea676f34f6fe73f2ed7705f1c591733c\": rpc error: code = NotFound desc = could not find container \"0b56ecf0f24c2095d9999b4495359d97ea676f34f6fe73f2ed7705f1c591733c\": container with ID starting with 0b56ecf0f24c2095d9999b4495359d97ea676f34f6fe73f2ed7705f1c591733c not found: ID does not exist" Jan 30 17:30:25 crc kubenswrapper[4875]: I0130 17:30:25.659794 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"12ac901b-5928-4e55-9e2b-71f0bfaf70e7","Type":"ContainerStarted","Data":"1c7f181b2b219648ffc216ac3b8ad3bf2d5e9832fbae03d8ae1a7ad6bfb19d43"} Jan 30 17:30:25 crc kubenswrapper[4875]: I0130 17:30:25.660142 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"12ac901b-5928-4e55-9e2b-71f0bfaf70e7","Type":"ContainerStarted","Data":"c81d6d2c87242dbd387c26d45a8a4a6d6c2a184b1775f3387d997e6ba9944d39"} Jan 30 17:30:25 crc kubenswrapper[4875]: I0130 17:30:25.662608 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5b4178bb-44e0-4346-a26a-de1835e64c11","Type":"ContainerStarted","Data":"32acfdf77c301f79b044ad2dc8e01ccddcea144d0c8e3cfdd4cbdcc4e03870e0"} Jan 30 17:30:25 crc kubenswrapper[4875]: I0130 17:30:25.662642 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5b4178bb-44e0-4346-a26a-de1835e64c11","Type":"ContainerStarted","Data":"5cab9fe7b3bab5032944f6c000616458aaca867775a7fc55b021104df998a0dc"} Jan 30 17:30:25 crc kubenswrapper[4875]: I0130 17:30:25.662655 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5b4178bb-44e0-4346-a26a-de1835e64c11","Type":"ContainerStarted","Data":"a139fd48faf53050fce36f35f4286365f4c34e9167fe1de39a867c4a158b87f1"} Jan 30 17:30:25 crc kubenswrapper[4875]: I0130 17:30:25.682610 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.682577997 podStartE2EDuration="2.682577997s" podCreationTimestamp="2026-01-30 17:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:30:25.679170908 +0000 UTC m=+2036.226534301" watchObservedRunningTime="2026-01-30 17:30:25.682577997 +0000 UTC m=+2036.229941380" Jan 30 17:30:25 crc kubenswrapper[4875]: I0130 17:30:25.700492 4875 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.700475487 podStartE2EDuration="2.700475487s" podCreationTimestamp="2026-01-30 17:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:30:25.693433113 +0000 UTC m=+2036.240796496" watchObservedRunningTime="2026-01-30 17:30:25.700475487 +0000 UTC m=+2036.247838860" Jan 30 17:30:25 crc kubenswrapper[4875]: I0130 17:30:25.972168 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:26 crc kubenswrapper[4875]: I0130 17:30:26.147277 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0edba1d-9578-4bed-abfa-c6625e8f942a" path="/var/lib/kubelet/pods/f0edba1d-9578-4bed-abfa-c6625e8f942a/volumes" Jan 30 17:30:29 crc kubenswrapper[4875]: I0130 17:30:29.158652 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:29 crc kubenswrapper[4875]: I0130 17:30:29.159657 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:30 crc kubenswrapper[4875]: I0130 17:30:30.971751 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:30 crc kubenswrapper[4875]: I0130 17:30:30.997728 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:31 crc kubenswrapper[4875]: I0130 17:30:31.275447 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wv8ks" Jan 30 17:30:31 crc kubenswrapper[4875]: I0130 17:30:31.320456 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wv8ks" Jan 30 17:30:31 crc kubenswrapper[4875]: I0130 17:30:31.748173 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:30:32 crc kubenswrapper[4875]: I0130 17:30:32.759041 4875 scope.go:117] "RemoveContainer" containerID="0020f39d9d126bbe926efda0d8e2cc87d2f29b6a281791f35168a10723dc25d0" Jan 30 17:30:34 crc kubenswrapper[4875]: I0130 17:30:34.044557 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:34 crc kubenswrapper[4875]: I0130 17:30:34.044883 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:34 crc kubenswrapper[4875]: I0130 17:30:34.161259 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:34 crc kubenswrapper[4875]: I0130 17:30:34.161367 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:34 crc kubenswrapper[4875]: I0130 17:30:34.884627 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wv8ks"] Jan 30 17:30:34 crc kubenswrapper[4875]: I0130 17:30:34.884884 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wv8ks" podUID="731abbcd-7cd7-49f8-baf9-ef35c4e00897" 
containerName="registry-server" containerID="cri-o://2d44a270df262d050a0939750e9e8ac31ec321211f0d713240397d2ed2e20bea" gracePeriod=2 Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.127974 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="12ac901b-5928-4e55-9e2b-71f0bfaf70e7" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.128257 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="12ac901b-5928-4e55-9e2b-71f0bfaf70e7" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.243732 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5b4178bb-44e0-4346-a26a-de1835e64c11" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.214:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.244272 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5b4178bb-44e0-4346-a26a-de1835e64c11" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.214:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.349100 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wv8ks" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.461708 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731abbcd-7cd7-49f8-baf9-ef35c4e00897-utilities\") pod \"731abbcd-7cd7-49f8-baf9-ef35c4e00897\" (UID: \"731abbcd-7cd7-49f8-baf9-ef35c4e00897\") " Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.461852 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hrxv\" (UniqueName: \"kubernetes.io/projected/731abbcd-7cd7-49f8-baf9-ef35c4e00897-kube-api-access-6hrxv\") pod \"731abbcd-7cd7-49f8-baf9-ef35c4e00897\" (UID: \"731abbcd-7cd7-49f8-baf9-ef35c4e00897\") " Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.461892 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731abbcd-7cd7-49f8-baf9-ef35c4e00897-catalog-content\") pod \"731abbcd-7cd7-49f8-baf9-ef35c4e00897\" (UID: \"731abbcd-7cd7-49f8-baf9-ef35c4e00897\") " Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.464145 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/731abbcd-7cd7-49f8-baf9-ef35c4e00897-utilities" (OuterVolumeSpecName: "utilities") pod "731abbcd-7cd7-49f8-baf9-ef35c4e00897" (UID: "731abbcd-7cd7-49f8-baf9-ef35c4e00897"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.469198 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/731abbcd-7cd7-49f8-baf9-ef35c4e00897-kube-api-access-6hrxv" (OuterVolumeSpecName: "kube-api-access-6hrxv") pod "731abbcd-7cd7-49f8-baf9-ef35c4e00897" (UID: "731abbcd-7cd7-49f8-baf9-ef35c4e00897"). InnerVolumeSpecName "kube-api-access-6hrxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.563931 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/731abbcd-7cd7-49f8-baf9-ef35c4e00897-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.563969 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hrxv\" (UniqueName: \"kubernetes.io/projected/731abbcd-7cd7-49f8-baf9-ef35c4e00897-kube-api-access-6hrxv\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.626304 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/731abbcd-7cd7-49f8-baf9-ef35c4e00897-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "731abbcd-7cd7-49f8-baf9-ef35c4e00897" (UID: "731abbcd-7cd7-49f8-baf9-ef35c4e00897"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.664881 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/731abbcd-7cd7-49f8-baf9-ef35c4e00897-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.770647 4875 generic.go:334] "Generic (PLEG): container finished" podID="731abbcd-7cd7-49f8-baf9-ef35c4e00897" containerID="2d44a270df262d050a0939750e9e8ac31ec321211f0d713240397d2ed2e20bea" exitCode=0 Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.770692 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wv8ks" event={"ID":"731abbcd-7cd7-49f8-baf9-ef35c4e00897","Type":"ContainerDied","Data":"2d44a270df262d050a0939750e9e8ac31ec321211f0d713240397d2ed2e20bea"} Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.770759 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wv8ks" event={"ID":"731abbcd-7cd7-49f8-baf9-ef35c4e00897","Type":"ContainerDied","Data":"edad9f498dc8827855599694447676d4490d09077b185ac3db7cac77d4c6fe07"} Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.770777 4875 scope.go:117] "RemoveContainer" containerID="2d44a270df262d050a0939750e9e8ac31ec321211f0d713240397d2ed2e20bea" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.770777 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wv8ks" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.805126 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wv8ks"] Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.807875 4875 scope.go:117] "RemoveContainer" containerID="0c974deeedacc9a90126815645dae49aae930cc6b53e761e8c834673e7c4efd4" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.813234 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wv8ks"] Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.829001 4875 scope.go:117] "RemoveContainer" containerID="528bea37e39e1e8d1caf3a742fcfd949e83bcae3fa7449b5775c3adb61ab074e" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.870699 4875 scope.go:117] "RemoveContainer" containerID="2d44a270df262d050a0939750e9e8ac31ec321211f0d713240397d2ed2e20bea" Jan 30 17:30:35 crc kubenswrapper[4875]: E0130 17:30:35.871197 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d44a270df262d050a0939750e9e8ac31ec321211f0d713240397d2ed2e20bea\": container with ID starting with 2d44a270df262d050a0939750e9e8ac31ec321211f0d713240397d2ed2e20bea not found: ID does not exist" containerID="2d44a270df262d050a0939750e9e8ac31ec321211f0d713240397d2ed2e20bea" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.871241 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d44a270df262d050a0939750e9e8ac31ec321211f0d713240397d2ed2e20bea"} err="failed to get container status \"2d44a270df262d050a0939750e9e8ac31ec321211f0d713240397d2ed2e20bea\": rpc error: code = NotFound desc = could not find container \"2d44a270df262d050a0939750e9e8ac31ec321211f0d713240397d2ed2e20bea\": container with ID starting with 2d44a270df262d050a0939750e9e8ac31ec321211f0d713240397d2ed2e20bea not found: ID does not exist" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.871270 4875 scope.go:117] "RemoveContainer" containerID="0c974deeedacc9a90126815645dae49aae930cc6b53e761e8c834673e7c4efd4" Jan 30 17:30:35 crc kubenswrapper[4875]: E0130 17:30:35.873568 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c974deeedacc9a90126815645dae49aae930cc6b53e761e8c834673e7c4efd4\": container with ID starting with 0c974deeedacc9a90126815645dae49aae930cc6b53e761e8c834673e7c4efd4 not found: ID does not exist" containerID="0c974deeedacc9a90126815645dae49aae930cc6b53e761e8c834673e7c4efd4" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.873617 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c974deeedacc9a90126815645dae49aae930cc6b53e761e8c834673e7c4efd4"} err="failed to get container status \"0c974deeedacc9a90126815645dae49aae930cc6b53e761e8c834673e7c4efd4\": rpc error: code = NotFound desc = could not find container \"0c974deeedacc9a90126815645dae49aae930cc6b53e761e8c834673e7c4efd4\": container with ID starting with 0c974deeedacc9a90126815645dae49aae930cc6b53e761e8c834673e7c4efd4 not found: ID does not exist" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.873638 4875 scope.go:117] "RemoveContainer" containerID="528bea37e39e1e8d1caf3a742fcfd949e83bcae3fa7449b5775c3adb61ab074e" Jan 30 17:30:35 crc kubenswrapper[4875]: E0130 17:30:35.874059 4875 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"528bea37e39e1e8d1caf3a742fcfd949e83bcae3fa7449b5775c3adb61ab074e\": container with ID starting with 528bea37e39e1e8d1caf3a742fcfd949e83bcae3fa7449b5775c3adb61ab074e not found: ID does not exist" containerID="528bea37e39e1e8d1caf3a742fcfd949e83bcae3fa7449b5775c3adb61ab074e" Jan 30 17:30:35 crc kubenswrapper[4875]: I0130 17:30:35.874091 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"528bea37e39e1e8d1caf3a742fcfd949e83bcae3fa7449b5775c3adb61ab074e"} err="failed to get container status \"528bea37e39e1e8d1caf3a742fcfd949e83bcae3fa7449b5775c3adb61ab074e\": rpc error: code = NotFound desc = could not find container \"528bea37e39e1e8d1caf3a742fcfd949e83bcae3fa7449b5775c3adb61ab074e\": container with ID starting with 528bea37e39e1e8d1caf3a742fcfd949e83bcae3fa7449b5775c3adb61ab074e not found: ID does not exist" Jan 30 17:30:36 crc kubenswrapper[4875]: I0130 17:30:36.150419 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="731abbcd-7cd7-49f8-baf9-ef35c4e00897" path="/var/lib/kubelet/pods/731abbcd-7cd7-49f8-baf9-ef35c4e00897/volumes" Jan 30 17:30:44 crc kubenswrapper[4875]: I0130 17:30:44.047609 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:44 crc kubenswrapper[4875]: I0130 17:30:44.048189 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:44 crc kubenswrapper[4875]: I0130 17:30:44.048510 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:44 crc kubenswrapper[4875]: I0130 17:30:44.048535 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:44 crc kubenswrapper[4875]: I0130 17:30:44.051944 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:44 crc kubenswrapper[4875]: I0130 17:30:44.052360 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:30:44 crc kubenswrapper[4875]: I0130 17:30:44.166185 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:44 crc kubenswrapper[4875]: I0130 17:30:44.168430 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:44 crc kubenswrapper[4875]: I0130 17:30:44.174744 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:30:44 crc kubenswrapper[4875]: I0130 17:30:44.845196 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:31:04 crc kubenswrapper[4875]: I0130 17:31:04.098449 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 30 17:31:04 crc kubenswrapper[4875]: I0130 17:31:04.100345 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bc5a12f2-88b7-4686-a4dd-f681febdbb09" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" 
containerID="cri-o://5cbb82ca7aabf3ee6d84971b498a312e21f28278a1a5feb134c2c0172a741f26" gracePeriod=30 Jan 30 17:31:04 crc kubenswrapper[4875]: I0130 17:31:04.107385 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:31:04 crc kubenswrapper[4875]: I0130 17:31:04.107656 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="e0b77110-37aa-4395-9028-e4c8bbad8515" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://fe5f432383d824e223eceb3c4c1c95d2cdf30bccbb3e20ab48339265253e476f" gracePeriod=30 Jan 30 17:31:04 crc kubenswrapper[4875]: I0130 17:31:04.212786 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:31:04 crc kubenswrapper[4875]: I0130 17:31:04.213078 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="28464b14-b0ae-497b-b209-4c5ee5b67b5a" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://43e4c62e8c024c87de87eccd7fd974c1ef89f1f23d76ee40886efbbb8c3b00ba" gracePeriod=30 Jan 30 17:31:04 crc kubenswrapper[4875]: I0130 17:31:04.223278 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:31:04 crc kubenswrapper[4875]: I0130 17:31:04.223784 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="12ac901b-5928-4e55-9e2b-71f0bfaf70e7" containerName="nova-kuttl-api-api" containerID="cri-o://1c7f181b2b219648ffc216ac3b8ad3bf2d5e9832fbae03d8ae1a7ad6bfb19d43" gracePeriod=30 Jan 30 17:31:04 crc kubenswrapper[4875]: I0130 17:31:04.223715 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="12ac901b-5928-4e55-9e2b-71f0bfaf70e7" containerName="nova-kuttl-api-log" containerID="cri-o://c81d6d2c87242dbd387c26d45a8a4a6d6c2a184b1775f3387d997e6ba9944d39" gracePeriod=30 Jan 30 17:31:05 crc kubenswrapper[4875]: I0130 17:31:05.017303 4875 generic.go:334] "Generic (PLEG): container finished" podID="12ac901b-5928-4e55-9e2b-71f0bfaf70e7" containerID="c81d6d2c87242dbd387c26d45a8a4a6d6c2a184b1775f3387d997e6ba9944d39" exitCode=143 Jan 30 17:31:05 crc kubenswrapper[4875]: I0130 17:31:05.017395 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"12ac901b-5928-4e55-9e2b-71f0bfaf70e7","Type":"ContainerDied","Data":"c81d6d2c87242dbd387c26d45a8a4a6d6c2a184b1775f3387d997e6ba9944d39"} Jan 30 17:31:05 crc kubenswrapper[4875]: E0130 17:31:05.122419 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fe5f432383d824e223eceb3c4c1c95d2cdf30bccbb3e20ab48339265253e476f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:31:05 crc kubenswrapper[4875]: E0130 17:31:05.123919 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fe5f432383d824e223eceb3c4c1c95d2cdf30bccbb3e20ab48339265253e476f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:31:05 crc kubenswrapper[4875]: E0130 17:31:05.125319 4875 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fe5f432383d824e223eceb3c4c1c95d2cdf30bccbb3e20ab48339265253e476f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:31:05 crc kubenswrapper[4875]: E0130 17:31:05.125396 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="e0b77110-37aa-4395-9028-e4c8bbad8515" containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:31:05 crc kubenswrapper[4875]: I0130 17:31:05.741968 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:31:05 crc kubenswrapper[4875]: I0130 17:31:05.940290 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28464b14-b0ae-497b-b209-4c5ee5b67b5a-config-data\") pod \"28464b14-b0ae-497b-b209-4c5ee5b67b5a\" (UID: \"28464b14-b0ae-497b-b209-4c5ee5b67b5a\") " Jan 30 17:31:05 crc kubenswrapper[4875]: I0130 17:31:05.940678 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drvm9\" (UniqueName: \"kubernetes.io/projected/28464b14-b0ae-497b-b209-4c5ee5b67b5a-kube-api-access-drvm9\") pod \"28464b14-b0ae-497b-b209-4c5ee5b67b5a\" (UID: \"28464b14-b0ae-497b-b209-4c5ee5b67b5a\") " Jan 30 17:31:05 crc kubenswrapper[4875]: I0130 17:31:05.946068 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28464b14-b0ae-497b-b209-4c5ee5b67b5a-kube-api-access-drvm9" (OuterVolumeSpecName: "kube-api-access-drvm9") pod "28464b14-b0ae-497b-b209-4c5ee5b67b5a" (UID: "28464b14-b0ae-497b-b209-4c5ee5b67b5a"). InnerVolumeSpecName "kube-api-access-drvm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:31:05 crc kubenswrapper[4875]: I0130 17:31:05.961833 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28464b14-b0ae-497b-b209-4c5ee5b67b5a-config-data" (OuterVolumeSpecName: "config-data") pod "28464b14-b0ae-497b-b209-4c5ee5b67b5a" (UID: "28464b14-b0ae-497b-b209-4c5ee5b67b5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.025701 4875 generic.go:334] "Generic (PLEG): container finished" podID="28464b14-b0ae-497b-b209-4c5ee5b67b5a" containerID="43e4c62e8c024c87de87eccd7fd974c1ef89f1f23d76ee40886efbbb8c3b00ba" exitCode=0 Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.025743 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.025741 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"28464b14-b0ae-497b-b209-4c5ee5b67b5a","Type":"ContainerDied","Data":"43e4c62e8c024c87de87eccd7fd974c1ef89f1f23d76ee40886efbbb8c3b00ba"} Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.025861 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"28464b14-b0ae-497b-b209-4c5ee5b67b5a","Type":"ContainerDied","Data":"af2eba5ba57eecd419df1ce4d404ca30bc70793ef2e470c047beb671a65b2bcf"} Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.025887 4875 scope.go:117] "RemoveContainer" containerID="43e4c62e8c024c87de87eccd7fd974c1ef89f1f23d76ee40886efbbb8c3b00ba" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.042220 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drvm9\" (UniqueName: \"kubernetes.io/projected/28464b14-b0ae-497b-b209-4c5ee5b67b5a-kube-api-access-drvm9\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.042250 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28464b14-b0ae-497b-b209-4c5ee5b67b5a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.062519 4875 scope.go:117] "RemoveContainer" containerID="43e4c62e8c024c87de87eccd7fd974c1ef89f1f23d76ee40886efbbb8c3b00ba" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.066389 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:31:06 crc kubenswrapper[4875]: E0130 17:31:06.067451 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43e4c62e8c024c87de87eccd7fd974c1ef89f1f23d76ee40886efbbb8c3b00ba\": container with ID starting with 43e4c62e8c024c87de87eccd7fd974c1ef89f1f23d76ee40886efbbb8c3b00ba not found: ID does not exist" containerID="43e4c62e8c024c87de87eccd7fd974c1ef89f1f23d76ee40886efbbb8c3b00ba" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.067506 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43e4c62e8c024c87de87eccd7fd974c1ef89f1f23d76ee40886efbbb8c3b00ba"} err="failed to get container status \"43e4c62e8c024c87de87eccd7fd974c1ef89f1f23d76ee40886efbbb8c3b00ba\": rpc error: code = NotFound desc = could not find container \"43e4c62e8c024c87de87eccd7fd974c1ef89f1f23d76ee40886efbbb8c3b00ba\": container with ID starting with 43e4c62e8c024c87de87eccd7fd974c1ef89f1f23d76ee40886efbbb8c3b00ba not found: ID does not exist" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.080200 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.092231 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:31:06 crc kubenswrapper[4875]: E0130 17:31:06.092655 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0edba1d-9578-4bed-abfa-c6625e8f942a" containerName="extract-utilities" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.092675 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0edba1d-9578-4bed-abfa-c6625e8f942a" 
containerName="extract-utilities" Jan 30 17:31:06 crc kubenswrapper[4875]: E0130 17:31:06.092690 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0edba1d-9578-4bed-abfa-c6625e8f942a" containerName="extract-content" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.092700 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0edba1d-9578-4bed-abfa-c6625e8f942a" containerName="extract-content" Jan 30 17:31:06 crc kubenswrapper[4875]: E0130 17:31:06.092722 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0edba1d-9578-4bed-abfa-c6625e8f942a" containerName="registry-server" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.092728 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0edba1d-9578-4bed-abfa-c6625e8f942a" containerName="registry-server" Jan 30 17:31:06 crc kubenswrapper[4875]: E0130 17:31:06.092738 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731abbcd-7cd7-49f8-baf9-ef35c4e00897" containerName="extract-content" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.092746 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="731abbcd-7cd7-49f8-baf9-ef35c4e00897" containerName="extract-content" Jan 30 17:31:06 crc kubenswrapper[4875]: E0130 17:31:06.092756 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28464b14-b0ae-497b-b209-4c5ee5b67b5a" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.092762 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="28464b14-b0ae-497b-b209-4c5ee5b67b5a" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:31:06 crc kubenswrapper[4875]: E0130 17:31:06.092769 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731abbcd-7cd7-49f8-baf9-ef35c4e00897" containerName="registry-server" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.092774 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="731abbcd-7cd7-49f8-baf9-ef35c4e00897" containerName="registry-server" Jan 30 17:31:06 crc kubenswrapper[4875]: E0130 17:31:06.092793 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731abbcd-7cd7-49f8-baf9-ef35c4e00897" containerName="extract-utilities" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.092801 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="731abbcd-7cd7-49f8-baf9-ef35c4e00897" containerName="extract-utilities" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.093000 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="731abbcd-7cd7-49f8-baf9-ef35c4e00897" containerName="registry-server" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.093023 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="28464b14-b0ae-497b-b209-4c5ee5b67b5a" containerName="nova-kuttl-scheduler-scheduler" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.093034 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0edba1d-9578-4bed-abfa-c6625e8f942a" containerName="registry-server" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.093909 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.104126 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.109624 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.145717 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e181f1bb-324d-4c85-849e-b6fc65dfc53f-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"e181f1bb-324d-4c85-849e-b6fc65dfc53f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.146485 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28464b14-b0ae-497b-b209-4c5ee5b67b5a" path="/var/lib/kubelet/pods/28464b14-b0ae-497b-b209-4c5ee5b67b5a/volumes" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.247218 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktbwx\" (UniqueName: \"kubernetes.io/projected/e181f1bb-324d-4c85-849e-b6fc65dfc53f-kube-api-access-ktbwx\") pod \"nova-kuttl-scheduler-0\" (UID: \"e181f1bb-324d-4c85-849e-b6fc65dfc53f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.247317 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e181f1bb-324d-4c85-849e-b6fc65dfc53f-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"e181f1bb-324d-4c85-849e-b6fc65dfc53f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.250463 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e181f1bb-324d-4c85-849e-b6fc65dfc53f-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"e181f1bb-324d-4c85-849e-b6fc65dfc53f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:31:06 crc kubenswrapper[4875]: E0130 17:31:06.293964 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5cbb82ca7aabf3ee6d84971b498a312e21f28278a1a5feb134c2c0172a741f26" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 30 17:31:06 crc kubenswrapper[4875]: E0130 17:31:06.295414 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5cbb82ca7aabf3ee6d84971b498a312e21f28278a1a5feb134c2c0172a741f26" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 30 17:31:06 crc kubenswrapper[4875]: E0130 17:31:06.297705 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5cbb82ca7aabf3ee6d84971b498a312e21f28278a1a5feb134c2c0172a741f26" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 30 17:31:06 crc kubenswrapper[4875]: E0130 17:31:06.297780 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command 
error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bc5a12f2-88b7-4686-a4dd-f681febdbb09" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.349528 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktbwx\" (UniqueName: \"kubernetes.io/projected/e181f1bb-324d-4c85-849e-b6fc65dfc53f-kube-api-access-ktbwx\") pod \"nova-kuttl-scheduler-0\" (UID: \"e181f1bb-324d-4c85-849e-b6fc65dfc53f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.366798 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktbwx\" (UniqueName: \"kubernetes.io/projected/e181f1bb-324d-4c85-849e-b6fc65dfc53f-kube-api-access-ktbwx\") pod \"nova-kuttl-scheduler-0\" (UID: \"e181f1bb-324d-4c85-849e-b6fc65dfc53f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.430891 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:31:06 crc kubenswrapper[4875]: W0130 17:31:06.915648 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode181f1bb_324d_4c85_849e_b6fc65dfc53f.slice/crio-c0fd7179c66db15bd9fbf93889c68968eedc9587f07caa65749604895be0f73a WatchSource:0}: Error finding container c0fd7179c66db15bd9fbf93889c68968eedc9587f07caa65749604895be0f73a: Status 404 returned error can't find the container with id c0fd7179c66db15bd9fbf93889c68968eedc9587f07caa65749604895be0f73a Jan 30 17:31:06 crc kubenswrapper[4875]: I0130 17:31:06.916568 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:31:07 crc kubenswrapper[4875]: I0130 17:31:07.043027 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"e181f1bb-324d-4c85-849e-b6fc65dfc53f","Type":"ContainerStarted","Data":"c0fd7179c66db15bd9fbf93889c68968eedc9587f07caa65749604895be0f73a"} Jan 30 17:31:07 crc kubenswrapper[4875]: I0130 17:31:07.434550 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:31:07 crc kubenswrapper[4875]: I0130 17:31:07.434889 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="c8259d14-22c2-46fe-ae19-81afd949566d" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://7c793348685e3d30ed2d2f6e6f8ba817bd0518cbe0bb405782d1d5a46d91ac42" gracePeriod=30 Jan 30 17:31:07 crc kubenswrapper[4875]: E0130 17:31:07.612122 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7c793348685e3d30ed2d2f6e6f8ba817bd0518cbe0bb405782d1d5a46d91ac42" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:31:07 crc kubenswrapper[4875]: E0130 17:31:07.615669 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="7c793348685e3d30ed2d2f6e6f8ba817bd0518cbe0bb405782d1d5a46d91ac42" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:31:07 crc kubenswrapper[4875]: E0130 17:31:07.617199 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7c793348685e3d30ed2d2f6e6f8ba817bd0518cbe0bb405782d1d5a46d91ac42" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 30 17:31:07 crc kubenswrapper[4875]: E0130 17:31:07.617267 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="c8259d14-22c2-46fe-ae19-81afd949566d" containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:31:07 crc kubenswrapper[4875]: I0130 17:31:07.801695 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:31:07 crc kubenswrapper[4875]: I0130 17:31:07.909864 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-logs\") pod \"12ac901b-5928-4e55-9e2b-71f0bfaf70e7\" (UID: \"12ac901b-5928-4e55-9e2b-71f0bfaf70e7\") " Jan 30 17:31:07 crc kubenswrapper[4875]: I0130 17:31:07.910006 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-config-data\") pod \"12ac901b-5928-4e55-9e2b-71f0bfaf70e7\" (UID: \"12ac901b-5928-4e55-9e2b-71f0bfaf70e7\") " Jan 30 17:31:07 crc kubenswrapper[4875]: I0130 17:31:07.910221 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52kxb\" (UniqueName: \"kubernetes.io/projected/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-kube-api-access-52kxb\") pod \"12ac901b-5928-4e55-9e2b-71f0bfaf70e7\" (UID: \"12ac901b-5928-4e55-9e2b-71f0bfaf70e7\") " Jan 30 17:31:07 crc kubenswrapper[4875]: I0130 17:31:07.910580 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-logs" (OuterVolumeSpecName: "logs") pod "12ac901b-5928-4e55-9e2b-71f0bfaf70e7" (UID: "12ac901b-5928-4e55-9e2b-71f0bfaf70e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:31:07 crc kubenswrapper[4875]: I0130 17:31:07.914776 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-kube-api-access-52kxb" (OuterVolumeSpecName: "kube-api-access-52kxb") pod "12ac901b-5928-4e55-9e2b-71f0bfaf70e7" (UID: "12ac901b-5928-4e55-9e2b-71f0bfaf70e7"). InnerVolumeSpecName "kube-api-access-52kxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:31:07 crc kubenswrapper[4875]: I0130 17:31:07.934172 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-config-data" (OuterVolumeSpecName: "config-data") pod "12ac901b-5928-4e55-9e2b-71f0bfaf70e7" (UID: "12ac901b-5928-4e55-9e2b-71f0bfaf70e7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.012823 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.012864 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.012875 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52kxb\" (UniqueName: \"kubernetes.io/projected/12ac901b-5928-4e55-9e2b-71f0bfaf70e7-kube-api-access-52kxb\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.058041 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"e181f1bb-324d-4c85-849e-b6fc65dfc53f","Type":"ContainerStarted","Data":"4bdbdee48d08073c023c397cc05590bdba1d67c794457d6d5ad51de7fee4ca6a"} Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.060191 4875 generic.go:334] "Generic (PLEG): container finished" podID="12ac901b-5928-4e55-9e2b-71f0bfaf70e7" containerID="1c7f181b2b219648ffc216ac3b8ad3bf2d5e9832fbae03d8ae1a7ad6bfb19d43" exitCode=0 Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.060230 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"12ac901b-5928-4e55-9e2b-71f0bfaf70e7","Type":"ContainerDied","Data":"1c7f181b2b219648ffc216ac3b8ad3bf2d5e9832fbae03d8ae1a7ad6bfb19d43"} Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.060251 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"12ac901b-5928-4e55-9e2b-71f0bfaf70e7","Type":"ContainerDied","Data":"857120e9c0dd0b3320bfd99210799269ffad420123f1dfd998f6ee6b952d27c4"} Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.060270 4875 scope.go:117] "RemoveContainer" containerID="1c7f181b2b219648ffc216ac3b8ad3bf2d5e9832fbae03d8ae1a7ad6bfb19d43" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.060408 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.095987 4875 scope.go:117] "RemoveContainer" containerID="c81d6d2c87242dbd387c26d45a8a4a6d6c2a184b1775f3387d997e6ba9944d39" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.097358 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.097331198 podStartE2EDuration="2.097331198s" podCreationTimestamp="2026-01-30 17:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:31:08.088043342 +0000 UTC m=+2078.635406725" watchObservedRunningTime="2026-01-30 17:31:08.097331198 +0000 UTC m=+2078.644694591" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.121661 4875 scope.go:117] "RemoveContainer" containerID="1c7f181b2b219648ffc216ac3b8ad3bf2d5e9832fbae03d8ae1a7ad6bfb19d43" Jan 30 17:31:08 crc kubenswrapper[4875]: E0130 17:31:08.122121 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c7f181b2b219648ffc216ac3b8ad3bf2d5e9832fbae03d8ae1a7ad6bfb19d43\": container with ID starting with 1c7f181b2b219648ffc216ac3b8ad3bf2d5e9832fbae03d8ae1a7ad6bfb19d43 not found: ID does not exist" containerID="1c7f181b2b219648ffc216ac3b8ad3bf2d5e9832fbae03d8ae1a7ad6bfb19d43" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.122167 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c7f181b2b219648ffc216ac3b8ad3bf2d5e9832fbae03d8ae1a7ad6bfb19d43"} err="failed to get container status \"1c7f181b2b219648ffc216ac3b8ad3bf2d5e9832fbae03d8ae1a7ad6bfb19d43\": rpc error: code = NotFound desc = could not find container \"1c7f181b2b219648ffc216ac3b8ad3bf2d5e9832fbae03d8ae1a7ad6bfb19d43\": container with ID starting with 1c7f181b2b219648ffc216ac3b8ad3bf2d5e9832fbae03d8ae1a7ad6bfb19d43 not found: ID does not exist" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.122196 4875 scope.go:117] "RemoveContainer" containerID="c81d6d2c87242dbd387c26d45a8a4a6d6c2a184b1775f3387d997e6ba9944d39" Jan 30 17:31:08 crc kubenswrapper[4875]: E0130 17:31:08.124049 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c81d6d2c87242dbd387c26d45a8a4a6d6c2a184b1775f3387d997e6ba9944d39\": container with ID starting with c81d6d2c87242dbd387c26d45a8a4a6d6c2a184b1775f3387d997e6ba9944d39 not found: ID does not exist" containerID="c81d6d2c87242dbd387c26d45a8a4a6d6c2a184b1775f3387d997e6ba9944d39" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.124093 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81d6d2c87242dbd387c26d45a8a4a6d6c2a184b1775f3387d997e6ba9944d39"} err="failed to get container status \"c81d6d2c87242dbd387c26d45a8a4a6d6c2a184b1775f3387d997e6ba9944d39\": rpc error: code = NotFound desc = could not find container \"c81d6d2c87242dbd387c26d45a8a4a6d6c2a184b1775f3387d997e6ba9944d39\": container with ID starting with c81d6d2c87242dbd387c26d45a8a4a6d6c2a184b1775f3387d997e6ba9944d39 not found: ID does not exist" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.148410 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.151145 4875 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.162913 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:31:08 crc kubenswrapper[4875]: E0130 17:31:08.163816 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12ac901b-5928-4e55-9e2b-71f0bfaf70e7" containerName="nova-kuttl-api-api" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.163842 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="12ac901b-5928-4e55-9e2b-71f0bfaf70e7" containerName="nova-kuttl-api-api" Jan 30 17:31:08 crc kubenswrapper[4875]: E0130 17:31:08.163874 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12ac901b-5928-4e55-9e2b-71f0bfaf70e7" containerName="nova-kuttl-api-log" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.163883 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="12ac901b-5928-4e55-9e2b-71f0bfaf70e7" containerName="nova-kuttl-api-log" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.164068 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="12ac901b-5928-4e55-9e2b-71f0bfaf70e7" containerName="nova-kuttl-api-log" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.164096 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="12ac901b-5928-4e55-9e2b-71f0bfaf70e7" containerName="nova-kuttl-api-api" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.165089 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.167569 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.173283 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.317404 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11253748-5fbe-477b-8d14-754cce765ecf-config-data\") pod \"nova-kuttl-api-0\" (UID: \"11253748-5fbe-477b-8d14-754cce765ecf\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.317532 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5m7c\" (UniqueName: \"kubernetes.io/projected/11253748-5fbe-477b-8d14-754cce765ecf-kube-api-access-v5m7c\") pod \"nova-kuttl-api-0\" (UID: \"11253748-5fbe-477b-8d14-754cce765ecf\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.317615 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11253748-5fbe-477b-8d14-754cce765ecf-logs\") pod \"nova-kuttl-api-0\" (UID: \"11253748-5fbe-477b-8d14-754cce765ecf\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.418678 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11253748-5fbe-477b-8d14-754cce765ecf-config-data\") pod \"nova-kuttl-api-0\" (UID: \"11253748-5fbe-477b-8d14-754cce765ecf\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 
17:31:08.418760 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5m7c\" (UniqueName: \"kubernetes.io/projected/11253748-5fbe-477b-8d14-754cce765ecf-kube-api-access-v5m7c\") pod \"nova-kuttl-api-0\" (UID: \"11253748-5fbe-477b-8d14-754cce765ecf\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.418797 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11253748-5fbe-477b-8d14-754cce765ecf-logs\") pod \"nova-kuttl-api-0\" (UID: \"11253748-5fbe-477b-8d14-754cce765ecf\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.419192 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11253748-5fbe-477b-8d14-754cce765ecf-logs\") pod \"nova-kuttl-api-0\" (UID: \"11253748-5fbe-477b-8d14-754cce765ecf\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.426603 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11253748-5fbe-477b-8d14-754cce765ecf-config-data\") pod \"nova-kuttl-api-0\" (UID: \"11253748-5fbe-477b-8d14-754cce765ecf\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.437491 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5m7c\" (UniqueName: \"kubernetes.io/projected/11253748-5fbe-477b-8d14-754cce765ecf-kube-api-access-v5m7c\") pod \"nova-kuttl-api-0\" (UID: \"11253748-5fbe-477b-8d14-754cce765ecf\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.483218 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:31:08 crc kubenswrapper[4875]: I0130 17:31:08.908882 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:31:08 crc kubenswrapper[4875]: W0130 17:31:08.913179 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11253748_5fbe_477b_8d14_754cce765ecf.slice/crio-b91eaa77dba865a3efa09f185286e9351df3e044eab7046029b5f1d89d0d5b93 WatchSource:0}: Error finding container b91eaa77dba865a3efa09f185286e9351df3e044eab7046029b5f1d89d0d5b93: Status 404 returned error can't find the container with id b91eaa77dba865a3efa09f185286e9351df3e044eab7046029b5f1d89d0d5b93 Jan 30 17:31:09 crc kubenswrapper[4875]: I0130 17:31:09.069320 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"11253748-5fbe-477b-8d14-754cce765ecf","Type":"ContainerStarted","Data":"b91eaa77dba865a3efa09f185286e9351df3e044eab7046029b5f1d89d0d5b93"} Jan 30 17:31:09 crc kubenswrapper[4875]: I0130 17:31:09.072120 4875 generic.go:334] "Generic (PLEG): container finished" podID="e0b77110-37aa-4395-9028-e4c8bbad8515" containerID="fe5f432383d824e223eceb3c4c1c95d2cdf30bccbb3e20ab48339265253e476f" exitCode=0 Jan 30 17:31:09 crc kubenswrapper[4875]: I0130 17:31:09.072403 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"e0b77110-37aa-4395-9028-e4c8bbad8515","Type":"ContainerDied","Data":"fe5f432383d824e223eceb3c4c1c95d2cdf30bccbb3e20ab48339265253e476f"} Jan 30 17:31:09 crc kubenswrapper[4875]: I0130 17:31:09.145985 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:31:09 crc kubenswrapper[4875]: I0130 17:31:09.332942 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b77110-37aa-4395-9028-e4c8bbad8515-config-data\") pod \"e0b77110-37aa-4395-9028-e4c8bbad8515\" (UID: \"e0b77110-37aa-4395-9028-e4c8bbad8515\") " Jan 30 17:31:09 crc kubenswrapper[4875]: I0130 17:31:09.333109 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btr2p\" (UniqueName: \"kubernetes.io/projected/e0b77110-37aa-4395-9028-e4c8bbad8515-kube-api-access-btr2p\") pod \"e0b77110-37aa-4395-9028-e4c8bbad8515\" (UID: \"e0b77110-37aa-4395-9028-e4c8bbad8515\") " Jan 30 17:31:09 crc kubenswrapper[4875]: I0130 17:31:09.336674 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0b77110-37aa-4395-9028-e4c8bbad8515-kube-api-access-btr2p" (OuterVolumeSpecName: "kube-api-access-btr2p") pod "e0b77110-37aa-4395-9028-e4c8bbad8515" (UID: "e0b77110-37aa-4395-9028-e4c8bbad8515"). InnerVolumeSpecName "kube-api-access-btr2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:31:09 crc kubenswrapper[4875]: I0130 17:31:09.359673 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b77110-37aa-4395-9028-e4c8bbad8515-config-data" (OuterVolumeSpecName: "config-data") pod "e0b77110-37aa-4395-9028-e4c8bbad8515" (UID: "e0b77110-37aa-4395-9028-e4c8bbad8515"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:31:09 crc kubenswrapper[4875]: I0130 17:31:09.434449 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b77110-37aa-4395-9028-e4c8bbad8515-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:09 crc kubenswrapper[4875]: I0130 17:31:09.434503 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btr2p\" (UniqueName: \"kubernetes.io/projected/e0b77110-37aa-4395-9028-e4c8bbad8515-kube-api-access-btr2p\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:09 crc kubenswrapper[4875]: I0130 17:31:09.983675 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.044755 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2jp2\" (UniqueName: \"kubernetes.io/projected/bc5a12f2-88b7-4686-a4dd-f681febdbb09-kube-api-access-b2jp2\") pod \"bc5a12f2-88b7-4686-a4dd-f681febdbb09\" (UID: \"bc5a12f2-88b7-4686-a4dd-f681febdbb09\") " Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.044797 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc5a12f2-88b7-4686-a4dd-f681febdbb09-config-data\") pod \"bc5a12f2-88b7-4686-a4dd-f681febdbb09\" (UID: \"bc5a12f2-88b7-4686-a4dd-f681febdbb09\") " Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.050779 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5a12f2-88b7-4686-a4dd-f681febdbb09-kube-api-access-b2jp2" (OuterVolumeSpecName: "kube-api-access-b2jp2") pod "bc5a12f2-88b7-4686-a4dd-f681febdbb09" (UID: "bc5a12f2-88b7-4686-a4dd-f681febdbb09"). InnerVolumeSpecName "kube-api-access-b2jp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.070648 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5a12f2-88b7-4686-a4dd-f681febdbb09-config-data" (OuterVolumeSpecName: "config-data") pod "bc5a12f2-88b7-4686-a4dd-f681febdbb09" (UID: "bc5a12f2-88b7-4686-a4dd-f681febdbb09"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.082469 4875 generic.go:334] "Generic (PLEG): container finished" podID="bc5a12f2-88b7-4686-a4dd-f681febdbb09" containerID="5cbb82ca7aabf3ee6d84971b498a312e21f28278a1a5feb134c2c0172a741f26" exitCode=0 Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.082525 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bc5a12f2-88b7-4686-a4dd-f681febdbb09","Type":"ContainerDied","Data":"5cbb82ca7aabf3ee6d84971b498a312e21f28278a1a5feb134c2c0172a741f26"} Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.082550 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bc5a12f2-88b7-4686-a4dd-f681febdbb09","Type":"ContainerDied","Data":"fb6e5346ed979cc1e9ce51f5a72925273c7081a332f62963b3c5a9abbf8e8842"} Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.082567 4875 scope.go:117] "RemoveContainer" containerID="5cbb82ca7aabf3ee6d84971b498a312e21f28278a1a5feb134c2c0172a741f26" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.082670 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.086107 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.086140 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"e0b77110-37aa-4395-9028-e4c8bbad8515","Type":"ContainerDied","Data":"02dd79b997abb8da8ee6a78c3310b487555c1d5ffd032dd268040f580239f4b8"} Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.088555 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"11253748-5fbe-477b-8d14-754cce765ecf","Type":"ContainerStarted","Data":"6c5d45d00a1881590cebeb7367fd29414e7246088ce355ebe563b816296a0f91"} Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.088687 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"11253748-5fbe-477b-8d14-754cce765ecf","Type":"ContainerStarted","Data":"b374cfe3fc2bcfcd29e5574f76e7cedf99dfb8bd44a0ca3549589323ee84f9cf"} Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.109223 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.109206475 podStartE2EDuration="2.109206475s" podCreationTimestamp="2026-01-30 17:31:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:31:10.10468431 +0000 UTC m=+2080.652047703" watchObservedRunningTime="2026-01-30 17:31:10.109206475 +0000 UTC m=+2080.656569858" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.146974 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2jp2\" (UniqueName: \"kubernetes.io/projected/bc5a12f2-88b7-4686-a4dd-f681febdbb09-kube-api-access-b2jp2\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.147182 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bc5a12f2-88b7-4686-a4dd-f681febdbb09-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.151508 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12ac901b-5928-4e55-9e2b-71f0bfaf70e7" path="/var/lib/kubelet/pods/12ac901b-5928-4e55-9e2b-71f0bfaf70e7/volumes" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.152041 4875 scope.go:117] "RemoveContainer" containerID="5cbb82ca7aabf3ee6d84971b498a312e21f28278a1a5feb134c2c0172a741f26" Jan 30 17:31:10 crc kubenswrapper[4875]: E0130 17:31:10.153035 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cbb82ca7aabf3ee6d84971b498a312e21f28278a1a5feb134c2c0172a741f26\": container with ID starting with 5cbb82ca7aabf3ee6d84971b498a312e21f28278a1a5feb134c2c0172a741f26 not found: ID does not exist" containerID="5cbb82ca7aabf3ee6d84971b498a312e21f28278a1a5feb134c2c0172a741f26" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.153087 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cbb82ca7aabf3ee6d84971b498a312e21f28278a1a5feb134c2c0172a741f26"} err="failed to get container status \"5cbb82ca7aabf3ee6d84971b498a312e21f28278a1a5feb134c2c0172a741f26\": rpc error: code = NotFound desc = could not find container \"5cbb82ca7aabf3ee6d84971b498a312e21f28278a1a5feb134c2c0172a741f26\": container with ID starting with 5cbb82ca7aabf3ee6d84971b498a312e21f28278a1a5feb134c2c0172a741f26 not found: ID does not exist" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.153117 4875 scope.go:117] "RemoveContainer" containerID="fe5f432383d824e223eceb3c4c1c95d2cdf30bccbb3e20ab48339265253e476f" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.183091 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.204903 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.210919 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.223320 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:31:10 crc kubenswrapper[4875]: E0130 17:31:10.223886 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc5a12f2-88b7-4686-a4dd-f681febdbb09" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.223908 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc5a12f2-88b7-4686-a4dd-f681febdbb09" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 30 17:31:10 crc kubenswrapper[4875]: E0130 17:31:10.223927 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0b77110-37aa-4395-9028-e4c8bbad8515" containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.223935 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b77110-37aa-4395-9028-e4c8bbad8515" containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.224081 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0b77110-37aa-4395-9028-e4c8bbad8515" 
containerName="nova-kuttl-cell0-conductor-conductor" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.224092 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc5a12f2-88b7-4686-a4dd-f681febdbb09" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.224710 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.228550 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.235293 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.246183 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.248484 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.248558 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z2kg\" (UniqueName: \"kubernetes.io/projected/69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e-kube-api-access-4z2kg\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.252303 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.253575 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.255893 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-compute-fake1-compute-config-data" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.261446 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.350394 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.350468 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z2kg\" (UniqueName: \"kubernetes.io/projected/69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e-kube-api-access-4z2kg\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.354336 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.369360 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z2kg\" (UniqueName: \"kubernetes.io/projected/69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e-kube-api-access-4z2kg\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.452268 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42bccab8-df28-43d4-92ae-d27a388ae8e4-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"42bccab8-df28-43d4-92ae-d27a388ae8e4\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.452381 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j85qp\" (UniqueName: \"kubernetes.io/projected/42bccab8-df28-43d4-92ae-d27a388ae8e4-kube-api-access-j85qp\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"42bccab8-df28-43d4-92ae-d27a388ae8e4\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.544473 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.554234 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j85qp\" (UniqueName: \"kubernetes.io/projected/42bccab8-df28-43d4-92ae-d27a388ae8e4-kube-api-access-j85qp\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"42bccab8-df28-43d4-92ae-d27a388ae8e4\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.554466 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42bccab8-df28-43d4-92ae-d27a388ae8e4-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"42bccab8-df28-43d4-92ae-d27a388ae8e4\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.558428 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42bccab8-df28-43d4-92ae-d27a388ae8e4-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"42bccab8-df28-43d4-92ae-d27a388ae8e4\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.584128 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j85qp\" (UniqueName: \"kubernetes.io/projected/42bccab8-df28-43d4-92ae-d27a388ae8e4-kube-api-access-j85qp\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"42bccab8-df28-43d4-92ae-d27a388ae8e4\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:31:10 crc kubenswrapper[4875]: I0130 17:31:10.869614 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:31:11 crc kubenswrapper[4875]: I0130 17:31:11.004391 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:31:11 crc kubenswrapper[4875]: W0130 17:31:11.017422 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69b49ed4_8de7_45eb_9dd1_ec5e27e4a50e.slice/crio-49da033ffb4f049a5e0cb5e2aa5a09706c3b0c52fe0944f6b1f03658d40bef6a WatchSource:0}: Error finding container 49da033ffb4f049a5e0cb5e2aa5a09706c3b0c52fe0944f6b1f03658d40bef6a: Status 404 returned error can't find the container with id 49da033ffb4f049a5e0cb5e2aa5a09706c3b0c52fe0944f6b1f03658d40bef6a Jan 30 17:31:11 crc kubenswrapper[4875]: I0130 17:31:11.098828 4875 generic.go:334] "Generic (PLEG): container finished" podID="c8259d14-22c2-46fe-ae19-81afd949566d" containerID="7c793348685e3d30ed2d2f6e6f8ba817bd0518cbe0bb405782d1d5a46d91ac42" exitCode=0 Jan 30 17:31:11 crc kubenswrapper[4875]: I0130 17:31:11.098929 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"c8259d14-22c2-46fe-ae19-81afd949566d","Type":"ContainerDied","Data":"7c793348685e3d30ed2d2f6e6f8ba817bd0518cbe0bb405782d1d5a46d91ac42"} Jan 30 17:31:11 crc kubenswrapper[4875]: I0130 17:31:11.104730 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e","Type":"ContainerStarted","Data":"49da033ffb4f049a5e0cb5e2aa5a09706c3b0c52fe0944f6b1f03658d40bef6a"} Jan 30 17:31:11 crc kubenswrapper[4875]: I0130 17:31:11.214898 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:31:11 crc kubenswrapper[4875]: I0130 17:31:11.372822 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nccvh\" (UniqueName: \"kubernetes.io/projected/c8259d14-22c2-46fe-ae19-81afd949566d-kube-api-access-nccvh\") pod \"c8259d14-22c2-46fe-ae19-81afd949566d\" (UID: \"c8259d14-22c2-46fe-ae19-81afd949566d\") " Jan 30 17:31:11 crc kubenswrapper[4875]: I0130 17:31:11.372896 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8259d14-22c2-46fe-ae19-81afd949566d-config-data\") pod \"c8259d14-22c2-46fe-ae19-81afd949566d\" (UID: \"c8259d14-22c2-46fe-ae19-81afd949566d\") " Jan 30 17:31:11 crc kubenswrapper[4875]: I0130 17:31:11.382710 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8259d14-22c2-46fe-ae19-81afd949566d-kube-api-access-nccvh" (OuterVolumeSpecName: "kube-api-access-nccvh") pod "c8259d14-22c2-46fe-ae19-81afd949566d" (UID: "c8259d14-22c2-46fe-ae19-81afd949566d"). InnerVolumeSpecName "kube-api-access-nccvh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:31:11 crc kubenswrapper[4875]: I0130 17:31:11.394285 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 30 17:31:11 crc kubenswrapper[4875]: W0130 17:31:11.398603 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42bccab8_df28_43d4_92ae_d27a388ae8e4.slice/crio-af06ce4a796a8183f48a75f6f8a7e0be1340a9a91eefff695ca9547f48fb9016 WatchSource:0}: Error finding container af06ce4a796a8183f48a75f6f8a7e0be1340a9a91eefff695ca9547f48fb9016: Status 404 returned error can't find the container with id af06ce4a796a8183f48a75f6f8a7e0be1340a9a91eefff695ca9547f48fb9016 Jan 30 17:31:11 crc kubenswrapper[4875]: I0130 17:31:11.398786 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8259d14-22c2-46fe-ae19-81afd949566d-config-data" (OuterVolumeSpecName: "config-data") pod "c8259d14-22c2-46fe-ae19-81afd949566d" (UID: "c8259d14-22c2-46fe-ae19-81afd949566d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:31:11 crc kubenswrapper[4875]: I0130 17:31:11.431299 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:31:11 crc kubenswrapper[4875]: I0130 17:31:11.474811 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nccvh\" (UniqueName: \"kubernetes.io/projected/c8259d14-22c2-46fe-ae19-81afd949566d-kube-api-access-nccvh\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:11 crc kubenswrapper[4875]: I0130 17:31:11.474854 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8259d14-22c2-46fe-ae19-81afd949566d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.113422 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e","Type":"ContainerStarted","Data":"4965f718e7f9d24a7e66c5d790455fa56f6c4ce4f1eb447be90dcf6bed1069dd"} Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.113527 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.115717 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"42bccab8-df28-43d4-92ae-d27a388ae8e4","Type":"ContainerStarted","Data":"b1dbbf72b2e4f14b2425b30ad6c612b096fb064f397c13b35758b04dfa0d7acb"} Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.115749 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"42bccab8-df28-43d4-92ae-d27a388ae8e4","Type":"ContainerStarted","Data":"af06ce4a796a8183f48a75f6f8a7e0be1340a9a91eefff695ca9547f48fb9016"} Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.115893 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.118039 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" 
event={"ID":"c8259d14-22c2-46fe-ae19-81afd949566d","Type":"ContainerDied","Data":"eb1c0bc3e22d90408224b3183cd0118bbf35148cae8d403a53754598c977b8e2"} Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.118079 4875 scope.go:117] "RemoveContainer" containerID="7c793348685e3d30ed2d2f6e6f8ba817bd0518cbe0bb405782d1d5a46d91ac42" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.118151 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.136353 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.136334627 podStartE2EDuration="2.136334627s" podCreationTimestamp="2026-01-30 17:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:31:12.135418558 +0000 UTC m=+2082.682781981" watchObservedRunningTime="2026-01-30 17:31:12.136334627 +0000 UTC m=+2082.683698010" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.152464 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5a12f2-88b7-4686-a4dd-f681febdbb09" path="/var/lib/kubelet/pods/bc5a12f2-88b7-4686-a4dd-f681febdbb09/volumes" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.153935 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0b77110-37aa-4395-9028-e4c8bbad8515" path="/var/lib/kubelet/pods/e0b77110-37aa-4395-9028-e4c8bbad8515/volumes" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.158799 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.193538 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podStartSLOduration=2.193519441 podStartE2EDuration="2.193519441s" podCreationTimestamp="2026-01-30 17:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:31:12.153407302 +0000 UTC m=+2082.700770735" watchObservedRunningTime="2026-01-30 17:31:12.193519441 +0000 UTC m=+2082.740882834" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.210842 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.217919 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.232889 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:31:12 crc kubenswrapper[4875]: E0130 17:31:12.233193 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8259d14-22c2-46fe-ae19-81afd949566d" containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.233208 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8259d14-22c2-46fe-ae19-81afd949566d" containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.233343 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8259d14-22c2-46fe-ae19-81afd949566d" 
containerName="nova-kuttl-cell1-conductor-conductor" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.233974 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.234047 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.239565 4875 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.397317 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0392f69a-9df6-49a5-b17a-0d39c748d83c-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"0392f69a-9df6-49a5-b17a-0d39c748d83c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.397647 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87cz6\" (UniqueName: \"kubernetes.io/projected/0392f69a-9df6-49a5-b17a-0d39c748d83c-kube-api-access-87cz6\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"0392f69a-9df6-49a5-b17a-0d39c748d83c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.499245 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0392f69a-9df6-49a5-b17a-0d39c748d83c-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"0392f69a-9df6-49a5-b17a-0d39c748d83c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.499317 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87cz6\" (UniqueName: \"kubernetes.io/projected/0392f69a-9df6-49a5-b17a-0d39c748d83c-kube-api-access-87cz6\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"0392f69a-9df6-49a5-b17a-0d39c748d83c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.512864 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0392f69a-9df6-49a5-b17a-0d39c748d83c-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"0392f69a-9df6-49a5-b17a-0d39c748d83c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.515026 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87cz6\" (UniqueName: \"kubernetes.io/projected/0392f69a-9df6-49a5-b17a-0d39c748d83c-kube-api-access-87cz6\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"0392f69a-9df6-49a5-b17a-0d39c748d83c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.551014 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:31:12 crc kubenswrapper[4875]: I0130 17:31:12.954909 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:31:12 crc kubenswrapper[4875]: W0130 17:31:12.962239 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0392f69a_9df6_49a5_b17a_0d39c748d83c.slice/crio-fa2fed679eb1c11ae5a4d4594219eadce6e1b1d81b4a542514ace275c9aceb20 WatchSource:0}: Error finding container fa2fed679eb1c11ae5a4d4594219eadce6e1b1d81b4a542514ace275c9aceb20: Status 404 returned error can't find the container with id fa2fed679eb1c11ae5a4d4594219eadce6e1b1d81b4a542514ace275c9aceb20 Jan 30 17:31:13 crc kubenswrapper[4875]: I0130 17:31:13.137748 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"0392f69a-9df6-49a5-b17a-0d39c748d83c","Type":"ContainerStarted","Data":"fa2fed679eb1c11ae5a4d4594219eadce6e1b1d81b4a542514ace275c9aceb20"} Jan 30 17:31:14 crc kubenswrapper[4875]: I0130 17:31:14.154367 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8259d14-22c2-46fe-ae19-81afd949566d" path="/var/lib/kubelet/pods/c8259d14-22c2-46fe-ae19-81afd949566d/volumes" Jan 30 17:31:14 crc kubenswrapper[4875]: I0130 17:31:14.157412 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"0392f69a-9df6-49a5-b17a-0d39c748d83c","Type":"ContainerStarted","Data":"c2e3d0678d4406ae037d5beef2b160fe10554e77d885be2da8152e7c88d62dba"} Jan 30 17:31:14 crc kubenswrapper[4875]: I0130 17:31:14.183167 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.183139578 podStartE2EDuration="2.183139578s" podCreationTimestamp="2026-01-30 17:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:31:14.176060291 +0000 UTC m=+2084.723423754" watchObservedRunningTime="2026-01-30 17:31:14.183139578 +0000 UTC m=+2084.730502981" Jan 30 17:31:15 crc kubenswrapper[4875]: I0130 17:31:15.175486 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 30 17:31:16 crc kubenswrapper[4875]: I0130 17:31:16.432066 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:31:16 crc kubenswrapper[4875]: I0130 17:31:16.453578 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:31:17 crc kubenswrapper[4875]: I0130 17:31:17.191834 4875 generic.go:334] "Generic (PLEG): container finished" podID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerID="b1dbbf72b2e4f14b2425b30ad6c612b096fb064f397c13b35758b04dfa0d7acb" exitCode=0 Jan 30 17:31:17 crc kubenswrapper[4875]: I0130 17:31:17.191917 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"42bccab8-df28-43d4-92ae-d27a388ae8e4","Type":"ContainerDied","Data":"b1dbbf72b2e4f14b2425b30ad6c612b096fb064f397c13b35758b04dfa0d7acb"} Jan 30 17:31:17 crc kubenswrapper[4875]: I0130 17:31:17.192987 4875 scope.go:117] "RemoveContainer" 
containerID="b1dbbf72b2e4f14b2425b30ad6c612b096fb064f397c13b35758b04dfa0d7acb" Jan 30 17:31:17 crc kubenswrapper[4875]: I0130 17:31:17.229194 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 30 17:31:18 crc kubenswrapper[4875]: I0130 17:31:18.201281 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"42bccab8-df28-43d4-92ae-d27a388ae8e4","Type":"ContainerStarted","Data":"f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4"} Jan 30 17:31:18 crc kubenswrapper[4875]: I0130 17:31:18.202002 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:31:18 crc kubenswrapper[4875]: I0130 17:31:18.226147 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 30 17:31:18 crc kubenswrapper[4875]: I0130 17:31:18.483961 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:31:18 crc kubenswrapper[4875]: I0130 17:31:18.484030 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:31:19 crc kubenswrapper[4875]: I0130 17:31:19.566773 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="11253748-5fbe-477b-8d14-754cce765ecf" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:31:19 crc kubenswrapper[4875]: I0130 17:31:19.567659 4875 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="11253748-5fbe-477b-8d14-754cce765ecf" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:31:20 crc kubenswrapper[4875]: I0130 17:31:20.578003 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:31:20 crc kubenswrapper[4875]: E0130 17:31:20.870535 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4 is running failed: container process not found" containerID="f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 30 17:31:20 crc kubenswrapper[4875]: E0130 17:31:20.870566 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4 is running failed: container process not found" containerID="f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 30 17:31:20 crc kubenswrapper[4875]: E0130 17:31:20.871633 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4 is running failed: container process 
not found" containerID="f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 30 17:31:20 crc kubenswrapper[4875]: E0130 17:31:20.871695 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4 is running failed: container process not found" containerID="f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 30 17:31:20 crc kubenswrapper[4875]: E0130 17:31:20.871947 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4 is running failed: container process not found" containerID="f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 30 17:31:20 crc kubenswrapper[4875]: E0130 17:31:20.872002 4875 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4 is running failed: container process not found" probeType="Liveness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 30 17:31:20 crc kubenswrapper[4875]: E0130 17:31:20.872522 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4 is running failed: container process not found" containerID="f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 30 17:31:20 crc kubenswrapper[4875]: E0130 17:31:20.872565 4875 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4 is running failed: container process not found" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 30 17:31:21 crc kubenswrapper[4875]: I0130 17:31:21.234124 4875 generic.go:334] "Generic (PLEG): container finished" podID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerID="f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4" exitCode=0 Jan 30 17:31:21 crc kubenswrapper[4875]: I0130 17:31:21.234193 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"42bccab8-df28-43d4-92ae-d27a388ae8e4","Type":"ContainerDied","Data":"f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4"} Jan 30 17:31:21 crc kubenswrapper[4875]: I0130 17:31:21.234229 4875 scope.go:117] "RemoveContainer" containerID="b1dbbf72b2e4f14b2425b30ad6c612b096fb064f397c13b35758b04dfa0d7acb" Jan 30 17:31:21 crc kubenswrapper[4875]: I0130 17:31:21.235388 4875 scope.go:117] "RemoveContainer" containerID="f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4" Jan 30 17:31:21 crc kubenswrapper[4875]: E0130 17:31:21.236564 4875 
Jan 30 17:31:21 crc kubenswrapper[4875]: E0130 17:31:21.236564 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-kuttl-cell1-compute-fake1-compute-compute\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-kuttl-cell1-compute-fake1-compute-compute pod=nova-kuttl-cell1-compute-fake1-compute-0_nova-kuttl-default(42bccab8-df28-43d4-92ae-d27a388ae8e4)\"" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4"
Jan 30 17:31:22 crc kubenswrapper[4875]: I0130 17:31:22.583365 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 30 17:31:25 crc kubenswrapper[4875]: I0130 17:31:25.870344 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 30 17:31:25 crc kubenswrapper[4875]: I0130 17:31:25.871314 4875 scope.go:117] "RemoveContainer" containerID="f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4"
Jan 30 17:31:25 crc kubenswrapper[4875]: E0130 17:31:25.871625 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-kuttl-cell1-compute-fake1-compute-compute\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-kuttl-cell1-compute-fake1-compute-compute pod=nova-kuttl-cell1-compute-fake1-compute-0_nova-kuttl-default(42bccab8-df28-43d4-92ae-d27a388ae8e4)\"" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4"
Jan 30 17:31:28 crc kubenswrapper[4875]: I0130 17:31:28.487209 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 30 17:31:28 crc kubenswrapper[4875]: I0130 17:31:28.487786 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 30 17:31:28 crc kubenswrapper[4875]: I0130 17:31:28.488722 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 30 17:31:28 crc kubenswrapper[4875]: I0130 17:31:28.490739 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 30 17:31:29 crc kubenswrapper[4875]: I0130 17:31:29.297479 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 30 17:31:29 crc kubenswrapper[4875]: I0130 17:31:29.301033 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 30 17:31:30 crc kubenswrapper[4875]: I0130 17:31:30.870095 4875 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 30 17:31:30 crc kubenswrapper[4875]: I0130 17:31:30.871074 4875 scope.go:117] "RemoveContainer" containerID="f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4"
Jan 30 17:31:31 crc kubenswrapper[4875]: I0130 17:31:31.316423 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"42bccab8-df28-43d4-92ae-d27a388ae8e4","Type":"ContainerStarted","Data":"011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f"}
Jan 30 17:31:31 crc kubenswrapper[4875]: I0130 17:31:31.317496 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 30 17:31:31 crc kubenswrapper[4875]: I0130 17:31:31.357866 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.110901 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c"]
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.116116 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq"]
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.121958 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-czqhq"]
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.126899 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-kfj5c"]
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.147117 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4352d42d-6f43-4899-95e8-cd45c91c2a6e" path="/var/lib/kubelet/pods/4352d42d-6f43-4899-95e8-cd45c91c2a6e/volumes"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.147669 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5210e64b-5ccb-46aa-9797-f42f13d13eab" path="/var/lib/kubelet/pods/5210e64b-5ccb-46aa-9797-f42f13d13eab/volumes"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.148124 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk"]
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.154078 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-789gk"]
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.168408 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"]
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.255065 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.255269 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://4965f718e7f9d24a7e66c5d790455fa56f6c4ce4f1eb447be90dcf6bed1069dd" gracePeriod=30
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.262778 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc"]
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.273969 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-6s2rc"]
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.317773 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell0b6ba-account-delete-nxvcr"]
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.318738 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0b6ba-account-delete-nxvcr"
pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" secret="" err="secret \"nova-nova-kuttl-dockercfg-wlxxk\" not found" Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.348135 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell0b6ba-account-delete-nxvcr"] Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.419762 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5p57\" (UniqueName: \"kubernetes.io/projected/7d7f3ab2-0758-4f19-8786-5d9cf4262bbe-kube-api-access-n5p57\") pod \"novacell0b6ba-account-delete-nxvcr\" (UID: \"7d7f3ab2-0758-4f19-8786-5d9cf4262bbe\") " pod="nova-kuttl-default/novacell0b6ba-account-delete-nxvcr" Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.419874 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d7f3ab2-0758-4f19-8786-5d9cf4262bbe-operator-scripts\") pod \"novacell0b6ba-account-delete-nxvcr\" (UID: \"7d7f3ab2-0758-4f19-8786-5d9cf4262bbe\") " pod="nova-kuttl-default/novacell0b6ba-account-delete-nxvcr" Jan 30 17:31:32 crc kubenswrapper[4875]: E0130 17:31:32.422147 4875 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 30 17:31:32 crc kubenswrapper[4875]: E0130 17:31:32.422262 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42bccab8-df28-43d4-92ae-d27a388ae8e4-config-data podName:42bccab8-df28-43d4-92ae-d27a388ae8e4 nodeName:}" failed. No retries permitted until 2026-01-30 17:31:32.922235388 +0000 UTC m=+2103.469598781 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/42bccab8-df28-43d4-92ae-d27a388ae8e4-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "42bccab8-df28-43d4-92ae-d27a388ae8e4") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.426866 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.427131 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="5452c976-86c4-4bc8-8610-f33467f8715c" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://8cb5e3fcd22f6993c310c1669c45bbec32d03b17568939d6f0e905f4f8994ff4" gracePeriod=30 Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.456597 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.456784 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="0392f69a-9df6-49a5-b17a-0d39c748d83c" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://c2e3d0678d4406ae037d5beef2b160fe10554e77d885be2da8152e7c88d62dba" gracePeriod=30 Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.471845 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252"] Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.489643 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novaapic51e-account-delete-jr25c"] Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.490741 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapic51e-account-delete-jr25c" Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.511448 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-26252"] Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.521415 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5p57\" (UniqueName: \"kubernetes.io/projected/7d7f3ab2-0758-4f19-8786-5d9cf4262bbe-kube-api-access-n5p57\") pod \"novacell0b6ba-account-delete-nxvcr\" (UID: \"7d7f3ab2-0758-4f19-8786-5d9cf4262bbe\") " pod="nova-kuttl-default/novacell0b6ba-account-delete-nxvcr" Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.521511 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d7f3ab2-0758-4f19-8786-5d9cf4262bbe-operator-scripts\") pod \"novacell0b6ba-account-delete-nxvcr\" (UID: \"7d7f3ab2-0758-4f19-8786-5d9cf4262bbe\") " pod="nova-kuttl-default/novacell0b6ba-account-delete-nxvcr" Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.522207 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d7f3ab2-0758-4f19-8786-5d9cf4262bbe-operator-scripts\") pod \"novacell0b6ba-account-delete-nxvcr\" (UID: \"7d7f3ab2-0758-4f19-8786-5d9cf4262bbe\") " pod="nova-kuttl-default/novacell0b6ba-account-delete-nxvcr" Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.523707 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.524088 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="11253748-5fbe-477b-8d14-754cce765ecf" containerName="nova-kuttl-api-api" containerID="cri-o://6c5d45d00a1881590cebeb7367fd29414e7246088ce355ebe563b816296a0f91" gracePeriod=30 Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.524394 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="11253748-5fbe-477b-8d14-754cce765ecf" containerName="nova-kuttl-api-log" containerID="cri-o://b374cfe3fc2bcfcd29e5574f76e7cedf99dfb8bd44a0ca3549589323ee84f9cf" gracePeriod=30 Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.540705 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.540875 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="e181f1bb-324d-4c85-849e-b6fc65dfc53f" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://4bdbdee48d08073c023c397cc05590bdba1d67c794457d6d5ad51de7fee4ca6a" gracePeriod=30 Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.555448 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5p57\" (UniqueName: \"kubernetes.io/projected/7d7f3ab2-0758-4f19-8786-5d9cf4262bbe-kube-api-access-n5p57\") pod \"novacell0b6ba-account-delete-nxvcr\" (UID: \"7d7f3ab2-0758-4f19-8786-5d9cf4262bbe\") " pod="nova-kuttl-default/novacell0b6ba-account-delete-nxvcr" Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.557747 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapic51e-account-delete-jr25c"] Jan 30 
Jan 30 17:31:32 crc kubenswrapper[4875]: E0130 17:31:32.562292 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2e3d0678d4406ae037d5beef2b160fe10554e77d885be2da8152e7c88d62dba" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 30 17:31:32 crc kubenswrapper[4875]: E0130 17:31:32.563918 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2e3d0678d4406ae037d5beef2b160fe10554e77d885be2da8152e7c88d62dba" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 30 17:31:32 crc kubenswrapper[4875]: E0130 17:31:32.565475 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2e3d0678d4406ae037d5beef2b160fe10554e77d885be2da8152e7c88d62dba" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 30 17:31:32 crc kubenswrapper[4875]: E0130 17:31:32.565508 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="0392f69a-9df6-49a5-b17a-0d39c748d83c" containerName="nova-kuttl-cell1-conductor-conductor"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.567923 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.568245 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5b4178bb-44e0-4346-a26a-de1835e64c11" containerName="nova-kuttl-metadata-log" containerID="cri-o://5cab9fe7b3bab5032944f6c000616458aaca867775a7fc55b021104df998a0dc" gracePeriod=30
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.568739 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5b4178bb-44e0-4346-a26a-de1835e64c11" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://32acfdf77c301f79b044ad2dc8e01ccddcea144d0c8e3cfdd4cbdcc4e03870e0" gracePeriod=30
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.607027 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell1717a-account-delete-bsmr5"]
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.608141 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1717a-account-delete-bsmr5"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.620562 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1717a-account-delete-bsmr5"]
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.623703 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76-operator-scripts\") pod \"novaapic51e-account-delete-jr25c\" (UID: \"8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76\") " pod="nova-kuttl-default/novaapic51e-account-delete-jr25c"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.623757 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk7w9\" (UniqueName: \"kubernetes.io/projected/8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76-kube-api-access-rk7w9\") pod \"novaapic51e-account-delete-jr25c\" (UID: \"8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76\") " pod="nova-kuttl-default/novaapic51e-account-delete-jr25c"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.638371 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0b6ba-account-delete-nxvcr"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.728410 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4msz\" (UniqueName: \"kubernetes.io/projected/6a7194d0-d476-4ada-8048-9e3366650bdd-kube-api-access-j4msz\") pod \"novacell1717a-account-delete-bsmr5\" (UID: \"6a7194d0-d476-4ada-8048-9e3366650bdd\") " pod="nova-kuttl-default/novacell1717a-account-delete-bsmr5"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.728467 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76-operator-scripts\") pod \"novaapic51e-account-delete-jr25c\" (UID: \"8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76\") " pod="nova-kuttl-default/novaapic51e-account-delete-jr25c"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.728500 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk7w9\" (UniqueName: \"kubernetes.io/projected/8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76-kube-api-access-rk7w9\") pod \"novaapic51e-account-delete-jr25c\" (UID: \"8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76\") " pod="nova-kuttl-default/novaapic51e-account-delete-jr25c"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.728571 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7194d0-d476-4ada-8048-9e3366650bdd-operator-scripts\") pod \"novacell1717a-account-delete-bsmr5\" (UID: \"6a7194d0-d476-4ada-8048-9e3366650bdd\") " pod="nova-kuttl-default/novacell1717a-account-delete-bsmr5"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.729573 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76-operator-scripts\") pod \"novaapic51e-account-delete-jr25c\" (UID: \"8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76\") " pod="nova-kuttl-default/novaapic51e-account-delete-jr25c"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.762735 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk7w9\" (UniqueName: \"kubernetes.io/projected/8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76-kube-api-access-rk7w9\") pod \"novaapic51e-account-delete-jr25c\" (UID: \"8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76\") " pod="nova-kuttl-default/novaapic51e-account-delete-jr25c"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.824020 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapic51e-account-delete-jr25c"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.830411 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7194d0-d476-4ada-8048-9e3366650bdd-operator-scripts\") pod \"novacell1717a-account-delete-bsmr5\" (UID: \"6a7194d0-d476-4ada-8048-9e3366650bdd\") " pod="nova-kuttl-default/novacell1717a-account-delete-bsmr5"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.830508 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4msz\" (UniqueName: \"kubernetes.io/projected/6a7194d0-d476-4ada-8048-9e3366650bdd-kube-api-access-j4msz\") pod \"novacell1717a-account-delete-bsmr5\" (UID: \"6a7194d0-d476-4ada-8048-9e3366650bdd\") " pod="nova-kuttl-default/novacell1717a-account-delete-bsmr5"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.832163 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7194d0-d476-4ada-8048-9e3366650bdd-operator-scripts\") pod \"novacell1717a-account-delete-bsmr5\" (UID: \"6a7194d0-d476-4ada-8048-9e3366650bdd\") " pod="nova-kuttl-default/novacell1717a-account-delete-bsmr5"
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.864128 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4msz\" (UniqueName: \"kubernetes.io/projected/6a7194d0-d476-4ada-8048-9e3366650bdd-kube-api-access-j4msz\") pod \"novacell1717a-account-delete-bsmr5\" (UID: \"6a7194d0-d476-4ada-8048-9e3366650bdd\") " pod="nova-kuttl-default/novacell1717a-account-delete-bsmr5"
Jan 30 17:31:32 crc kubenswrapper[4875]: E0130 17:31:32.934906 4875 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
Jan 30 17:31:32 crc kubenswrapper[4875]: E0130 17:31:32.934993 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42bccab8-df28-43d4-92ae-d27a388ae8e4-config-data podName:42bccab8-df28-43d4-92ae-d27a388ae8e4 nodeName:}" failed. No retries permitted until 2026-01-30 17:31:33.934970835 +0000 UTC m=+2104.482334218 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/42bccab8-df28-43d4-92ae-d27a388ae8e4-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "42bccab8-df28-43d4-92ae-d27a388ae8e4") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
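Note: the MountVolume.SetUp failures for the deleted config-data secret show the volume manager's per-operation exponential back-off: durationBeforeRetry is 500ms at 17:31:32.422, 1s just above, and the 2s and 4s retries follow further below. The seed and the doubling are read directly off the log timestamps; the cap in this Go sketch is an assumption, since this log never reaches it.

package main

import (
	"fmt"
	"time"
)

// retryDelays reproduces the durationBeforeRetry progression seen in the
// nestedpendingoperations entries: each failed attempt doubles the wait.
func retryDelays(attempts int) []time.Duration {
	const seed = 500 * time.Millisecond // taken from the log
	const maxInterval = 2 * time.Minute // assumed cap, not visible in the log
	delays := make([]time.Duration, 0, attempts)
	d := seed
	for i := 0; i < attempts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > maxInterval {
			d = maxInterval
		}
	}
	return delays
}

func main() {
	fmt.Println(retryDelays(4)) // [500ms 1s 2s 4s]
}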
Jan 30 17:31:32 crc kubenswrapper[4875]: I0130 17:31:32.974679 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1717a-account-delete-bsmr5"
Jan 30 17:31:33 crc kubenswrapper[4875]: I0130 17:31:33.146406 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell0b6ba-account-delete-nxvcr"]
Jan 30 17:31:33 crc kubenswrapper[4875]: W0130 17:31:33.191674 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d7f3ab2_0758_4f19_8786_5d9cf4262bbe.slice/crio-b42ec382ca845c3f2b7570485db076487ee9563cb5f1ddf8a3d188f41307b746 WatchSource:0}: Error finding container b42ec382ca845c3f2b7570485db076487ee9563cb5f1ddf8a3d188f41307b746: Status 404 returned error can't find the container with id b42ec382ca845c3f2b7570485db076487ee9563cb5f1ddf8a3d188f41307b746
Jan 30 17:31:33 crc kubenswrapper[4875]: I0130 17:31:33.310112 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapic51e-account-delete-jr25c"]
Jan 30 17:31:33 crc kubenswrapper[4875]: I0130 17:31:33.337907 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapic51e-account-delete-jr25c" event={"ID":"8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76","Type":"ContainerStarted","Data":"27f98a645443f9187c1742661ff0f1f0775c91b012973aeee22634afb7518dd2"}
Jan 30 17:31:33 crc kubenswrapper[4875]: I0130 17:31:33.339665 4875 generic.go:334] "Generic (PLEG): container finished" podID="5b4178bb-44e0-4346-a26a-de1835e64c11" containerID="5cab9fe7b3bab5032944f6c000616458aaca867775a7fc55b021104df998a0dc" exitCode=143
Jan 30 17:31:33 crc kubenswrapper[4875]: I0130 17:31:33.339786 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5b4178bb-44e0-4346-a26a-de1835e64c11","Type":"ContainerDied","Data":"5cab9fe7b3bab5032944f6c000616458aaca867775a7fc55b021104df998a0dc"}
Jan 30 17:31:33 crc kubenswrapper[4875]: I0130 17:31:33.343017 4875 generic.go:334] "Generic (PLEG): container finished" podID="11253748-5fbe-477b-8d14-754cce765ecf" containerID="b374cfe3fc2bcfcd29e5574f76e7cedf99dfb8bd44a0ca3549589323ee84f9cf" exitCode=143
Jan 30 17:31:33 crc kubenswrapper[4875]: I0130 17:31:33.343092 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"11253748-5fbe-477b-8d14-754cce765ecf","Type":"ContainerDied","Data":"b374cfe3fc2bcfcd29e5574f76e7cedf99dfb8bd44a0ca3549589323ee84f9cf"}
Jan 30 17:31:33 crc kubenswrapper[4875]: I0130 17:31:33.344509 4875 generic.go:334] "Generic (PLEG): container finished" podID="5452c976-86c4-4bc8-8610-f33467f8715c" containerID="8cb5e3fcd22f6993c310c1669c45bbec32d03b17568939d6f0e905f4f8994ff4" exitCode=0
Jan 30 17:31:33 crc kubenswrapper[4875]: I0130 17:31:33.344562 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"5452c976-86c4-4bc8-8610-f33467f8715c","Type":"ContainerDied","Data":"8cb5e3fcd22f6993c310c1669c45bbec32d03b17568939d6f0e905f4f8994ff4"}
Jan 30 17:31:33 crc kubenswrapper[4875]: I0130 17:31:33.346178 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" containerID="cri-o://011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" gracePeriod=30
Jan 30 17:31:33 crc kubenswrapper[4875]: I0130 17:31:33.346540 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0b6ba-account-delete-nxvcr" event={"ID":"7d7f3ab2-0758-4f19-8786-5d9cf4262bbe","Type":"ContainerStarted","Data":"b42ec382ca845c3f2b7570485db076487ee9563cb5f1ddf8a3d188f41307b746"}
Jan 30 17:31:33 crc kubenswrapper[4875]: I0130 17:31:33.498886 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1717a-account-delete-bsmr5"]
Jan 30 17:31:33 crc kubenswrapper[4875]: W0130 17:31:33.506726 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a7194d0_d476_4ada_8048_9e3366650bdd.slice/crio-7f9b712ba335d9a257caad59044fbacc0fc91b4df814c78bafb1c783f600095d WatchSource:0}: Error finding container 7f9b712ba335d9a257caad59044fbacc0fc91b4df814c78bafb1c783f600095d: Status 404 returned error can't find the container with id 7f9b712ba335d9a257caad59044fbacc0fc91b4df814c78bafb1c783f600095d
Jan 30 17:31:33 crc kubenswrapper[4875]: I0130 17:31:33.787216 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 30 17:31:33 crc kubenswrapper[4875]: I0130 17:31:33.957134 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5452c976-86c4-4bc8-8610-f33467f8715c-config-data\") pod \"5452c976-86c4-4bc8-8610-f33467f8715c\" (UID: \"5452c976-86c4-4bc8-8610-f33467f8715c\") "
Jan 30 17:31:33 crc kubenswrapper[4875]: I0130 17:31:33.958077 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dk22\" (UniqueName: \"kubernetes.io/projected/5452c976-86c4-4bc8-8610-f33467f8715c-kube-api-access-9dk22\") pod \"5452c976-86c4-4bc8-8610-f33467f8715c\" (UID: \"5452c976-86c4-4bc8-8610-f33467f8715c\") "
Jan 30 17:31:33 crc kubenswrapper[4875]: E0130 17:31:33.958621 4875 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
Jan 30 17:31:33 crc kubenswrapper[4875]: E0130 17:31:33.958697 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42bccab8-df28-43d4-92ae-d27a388ae8e4-config-data podName:42bccab8-df28-43d4-92ae-d27a388ae8e4 nodeName:}" failed. No retries permitted until 2026-01-30 17:31:35.958675285 +0000 UTC m=+2106.506038668 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/42bccab8-df28-43d4-92ae-d27a388ae8e4-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "42bccab8-df28-43d4-92ae-d27a388ae8e4") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
Jan 30 17:31:33 crc kubenswrapper[4875]: I0130 17:31:33.970742 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5452c976-86c4-4bc8-8610-f33467f8715c-kube-api-access-9dk22" (OuterVolumeSpecName: "kube-api-access-9dk22") pod "5452c976-86c4-4bc8-8610-f33467f8715c" (UID: "5452c976-86c4-4bc8-8610-f33467f8715c"). InnerVolumeSpecName "kube-api-access-9dk22". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:31:33 crc kubenswrapper[4875]: I0130 17:31:33.979140 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5452c976-86c4-4bc8-8610-f33467f8715c-config-data" (OuterVolumeSpecName: "config-data") pod "5452c976-86c4-4bc8-8610-f33467f8715c" (UID: "5452c976-86c4-4bc8-8610-f33467f8715c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.061169 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5452c976-86c4-4bc8-8610-f33467f8715c-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.061202 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dk22\" (UniqueName: \"kubernetes.io/projected/5452c976-86c4-4bc8-8610-f33467f8715c-kube-api-access-9dk22\") on node \"crc\" DevicePath \"\""
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.149510 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09788124-6879-4677-83af-a4e8cc11f838" path="/var/lib/kubelet/pods/09788124-6879-4677-83af-a4e8cc11f838/volumes"
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.150063 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="220f50f1-8337-455d-b973-24e9d7b1917c" path="/var/lib/kubelet/pods/220f50f1-8337-455d-b973-24e9d7b1917c/volumes"
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.150533 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="222cd988-6d37-47a7-a67b-bb75d55912f9" path="/var/lib/kubelet/pods/222cd988-6d37-47a7-a67b-bb75d55912f9/volumes"
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.365382 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"5452c976-86c4-4bc8-8610-f33467f8715c","Type":"ContainerDied","Data":"c252807e8df8c727b5a65229585793127dfbea3ab2a003a32895c8d2845db9a6"}
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.365435 4875 scope.go:117] "RemoveContainer" containerID="8cb5e3fcd22f6993c310c1669c45bbec32d03b17568939d6f0e905f4f8994ff4"
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.365551 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.369400 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0b6ba-account-delete-nxvcr" event={"ID":"7d7f3ab2-0758-4f19-8786-5d9cf4262bbe","Type":"ContainerStarted","Data":"bbfa23785eb18fe9d0fd851a0d2655426dfe59eae7a4164d11eb6912e983cb47"}
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.373152 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1717a-account-delete-bsmr5" event={"ID":"6a7194d0-d476-4ada-8048-9e3366650bdd","Type":"ContainerStarted","Data":"255aef2a1011cac29ec4a3195419ccb6464779ea8efb5b71a779497949cb44d4"}
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.373198 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1717a-account-delete-bsmr5" event={"ID":"6a7194d0-d476-4ada-8048-9e3366650bdd","Type":"ContainerStarted","Data":"7f9b712ba335d9a257caad59044fbacc0fc91b4df814c78bafb1c783f600095d"}
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.375520 4875 generic.go:334] "Generic (PLEG): container finished" podID="0392f69a-9df6-49a5-b17a-0d39c748d83c" containerID="c2e3d0678d4406ae037d5beef2b160fe10554e77d885be2da8152e7c88d62dba" exitCode=0
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.375604 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"0392f69a-9df6-49a5-b17a-0d39c748d83c","Type":"ContainerDied","Data":"c2e3d0678d4406ae037d5beef2b160fe10554e77d885be2da8152e7c88d62dba"}
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.379737 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapic51e-account-delete-jr25c" event={"ID":"8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76","Type":"ContainerStarted","Data":"14c44674ed3f3726b851288b88991b9bbb5d77f52fb0bcc14b14b104a80d17f8"}
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.387839 4875 generic.go:334] "Generic (PLEG): container finished" podID="e181f1bb-324d-4c85-849e-b6fc65dfc53f" containerID="4bdbdee48d08073c023c397cc05590bdba1d67c794457d6d5ad51de7fee4ca6a" exitCode=0
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.387913 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"e181f1bb-324d-4c85-849e-b6fc65dfc53f","Type":"ContainerDied","Data":"4bdbdee48d08073c023c397cc05590bdba1d67c794457d6d5ad51de7fee4ca6a"}
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.395041 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.405395 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.409511 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novacell0b6ba-account-delete-nxvcr" podStartSLOduration=2.409493378 podStartE2EDuration="2.409493378s" podCreationTimestamp="2026-01-30 17:31:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:31:34.401797933 +0000 UTC m=+2104.949161316" watchObservedRunningTime="2026-01-30 17:31:34.409493378 +0000 UTC m=+2104.956856761"
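Note: the exit codes in the "container finished" events above follow the usual 128+signal convention. The nova-kuttl-api-log and nova-kuttl-metadata-log containers report exitCode=143 (128 plus SIGTERM's 15, apparently because they do not trap SIGTERM), while the novncproxy, conductor, and scheduler containers shut down cleanly within their gracePeriod=30 and report exitCode=0. A tiny Go demonstration of the mapping:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// 128 + signal number is the conventional exit code for a process
	// terminated by that signal: SIGTERM (15) -> 143, SIGKILL (9) -> 137.
	fmt.Println(128 + int(syscall.SIGTERM)) // 143
	fmt.Println(128 + int(syscall.SIGKILL)) // 137
}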
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.429654 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novaapic51e-account-delete-jr25c" podStartSLOduration=2.4296338410000002 podStartE2EDuration="2.429633841s" podCreationTimestamp="2026-01-30 17:31:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:31:34.416522462 +0000 UTC m=+2104.963885845" watchObservedRunningTime="2026-01-30 17:31:34.429633841 +0000 UTC m=+2104.976997224"
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.445593 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novacell1717a-account-delete-bsmr5" podStartSLOduration=2.445564919 podStartE2EDuration="2.445564919s" podCreationTimestamp="2026-01-30 17:31:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:31:34.434793645 +0000 UTC m=+2104.982157028" watchObservedRunningTime="2026-01-30 17:31:34.445564919 +0000 UTC m=+2104.992928302"
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.552371 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.675225 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktbwx\" (UniqueName: \"kubernetes.io/projected/e181f1bb-324d-4c85-849e-b6fc65dfc53f-kube-api-access-ktbwx\") pod \"e181f1bb-324d-4c85-849e-b6fc65dfc53f\" (UID: \"e181f1bb-324d-4c85-849e-b6fc65dfc53f\") "
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.675649 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e181f1bb-324d-4c85-849e-b6fc65dfc53f-config-data\") pod \"e181f1bb-324d-4c85-849e-b6fc65dfc53f\" (UID: \"e181f1bb-324d-4c85-849e-b6fc65dfc53f\") "
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.680676 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e181f1bb-324d-4c85-849e-b6fc65dfc53f-kube-api-access-ktbwx" (OuterVolumeSpecName: "kube-api-access-ktbwx") pod "e181f1bb-324d-4c85-849e-b6fc65dfc53f" (UID: "e181f1bb-324d-4c85-849e-b6fc65dfc53f"). InnerVolumeSpecName "kube-api-access-ktbwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.707664 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e181f1bb-324d-4c85-849e-b6fc65dfc53f-config-data" (OuterVolumeSpecName: "config-data") pod "e181f1bb-324d-4c85-849e-b6fc65dfc53f" (UID: "e181f1bb-324d-4c85-849e-b6fc65dfc53f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.778624 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktbwx\" (UniqueName: \"kubernetes.io/projected/e181f1bb-324d-4c85-849e-b6fc65dfc53f-kube-api-access-ktbwx\") on node \"crc\" DevicePath \"\""
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.778654 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e181f1bb-324d-4c85-849e-b6fc65dfc53f-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:31:34 crc kubenswrapper[4875]: I0130 17:31:34.994056 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.082451 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0392f69a-9df6-49a5-b17a-0d39c748d83c-config-data\") pod \"0392f69a-9df6-49a5-b17a-0d39c748d83c\" (UID: \"0392f69a-9df6-49a5-b17a-0d39c748d83c\") "
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.082928 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87cz6\" (UniqueName: \"kubernetes.io/projected/0392f69a-9df6-49a5-b17a-0d39c748d83c-kube-api-access-87cz6\") pod \"0392f69a-9df6-49a5-b17a-0d39c748d83c\" (UID: \"0392f69a-9df6-49a5-b17a-0d39c748d83c\") "
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.086749 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0392f69a-9df6-49a5-b17a-0d39c748d83c-kube-api-access-87cz6" (OuterVolumeSpecName: "kube-api-access-87cz6") pod "0392f69a-9df6-49a5-b17a-0d39c748d83c" (UID: "0392f69a-9df6-49a5-b17a-0d39c748d83c"). InnerVolumeSpecName "kube-api-access-87cz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.121904 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0392f69a-9df6-49a5-b17a-0d39c748d83c-config-data" (OuterVolumeSpecName: "config-data") pod "0392f69a-9df6-49a5-b17a-0d39c748d83c" (UID: "0392f69a-9df6-49a5-b17a-0d39c748d83c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.188860 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87cz6\" (UniqueName: \"kubernetes.io/projected/0392f69a-9df6-49a5-b17a-0d39c748d83c-kube-api-access-87cz6\") on node \"crc\" DevicePath \"\""
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.188922 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0392f69a-9df6-49a5-b17a-0d39c748d83c-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.395885 4875 generic.go:334] "Generic (PLEG): container finished" podID="7d7f3ab2-0758-4f19-8786-5d9cf4262bbe" containerID="bbfa23785eb18fe9d0fd851a0d2655426dfe59eae7a4164d11eb6912e983cb47" exitCode=0
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.395947 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0b6ba-account-delete-nxvcr" event={"ID":"7d7f3ab2-0758-4f19-8786-5d9cf4262bbe","Type":"ContainerDied","Data":"bbfa23785eb18fe9d0fd851a0d2655426dfe59eae7a4164d11eb6912e983cb47"}
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.397403 4875 generic.go:334] "Generic (PLEG): container finished" podID="6a7194d0-d476-4ada-8048-9e3366650bdd" containerID="255aef2a1011cac29ec4a3195419ccb6464779ea8efb5b71a779497949cb44d4" exitCode=0
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.397449 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1717a-account-delete-bsmr5" event={"ID":"6a7194d0-d476-4ada-8048-9e3366650bdd","Type":"ContainerDied","Data":"255aef2a1011cac29ec4a3195419ccb6464779ea8efb5b71a779497949cb44d4"}
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.398996 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"0392f69a-9df6-49a5-b17a-0d39c748d83c","Type":"ContainerDied","Data":"fa2fed679eb1c11ae5a4d4594219eadce6e1b1d81b4a542514ace275c9aceb20"}
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.399020 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.399041 4875 scope.go:117] "RemoveContainer" containerID="c2e3d0678d4406ae037d5beef2b160fe10554e77d885be2da8152e7c88d62dba"
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.400561 4875 generic.go:334] "Generic (PLEG): container finished" podID="8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76" containerID="14c44674ed3f3726b851288b88991b9bbb5d77f52fb0bcc14b14b104a80d17f8" exitCode=0
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.400631 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapic51e-account-delete-jr25c" event={"ID":"8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76","Type":"ContainerDied","Data":"14c44674ed3f3726b851288b88991b9bbb5d77f52fb0bcc14b14b104a80d17f8"}
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.403901 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"e181f1bb-324d-4c85-849e-b6fc65dfc53f","Type":"ContainerDied","Data":"c0fd7179c66db15bd9fbf93889c68968eedc9587f07caa65749604895be0f73a"}
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.403998 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.443609 4875 scope.go:117] "RemoveContainer" containerID="4bdbdee48d08073c023c397cc05590bdba1d67c794457d6d5ad51de7fee4ca6a"
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.479384 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.485229 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.490598 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.495687 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 30 17:31:35 crc kubenswrapper[4875]: E0130 17:31:35.547475 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4965f718e7f9d24a7e66c5d790455fa56f6c4ce4f1eb447be90dcf6bed1069dd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 30 17:31:35 crc kubenswrapper[4875]: E0130 17:31:35.548820 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4965f718e7f9d24a7e66c5d790455fa56f6c4ce4f1eb447be90dcf6bed1069dd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 30 17:31:35 crc kubenswrapper[4875]: E0130 17:31:35.550064 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4965f718e7f9d24a7e66c5d790455fa56f6c4ce4f1eb447be90dcf6bed1069dd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 30 17:31:35 crc kubenswrapper[4875]: E0130 17:31:35.550105 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e" containerName="nova-kuttl-cell0-conductor-conductor"
Jan 30 17:31:35 crc kubenswrapper[4875]: E0130 17:31:35.871878 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:31:35 crc kubenswrapper[4875]: E0130 17:31:35.873212 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:31:35 crc kubenswrapper[4875]: E0130 17:31:35.874659 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:31:35 crc kubenswrapper[4875]: E0130 17:31:35.874698 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.967010 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5b4178bb-44e0-4346-a26a-de1835e64c11" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.214:8775/\": read tcp 10.217.0.2:50800->10.217.0.214:8775: read: connection reset by peer"
Jan 30 17:31:35 crc kubenswrapper[4875]: I0130 17:31:35.967045 4875 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5b4178bb-44e0-4346-a26a-de1835e64c11" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.214:8775/\": read tcp 10.217.0.2:50812->10.217.0.214:8775: read: connection reset by peer"
Jan 30 17:31:36 crc kubenswrapper[4875]: E0130 17:31:36.002256 4875 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
Jan 30 17:31:36 crc kubenswrapper[4875]: E0130 17:31:36.002346 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42bccab8-df28-43d4-92ae-d27a388ae8e4-config-data podName:42bccab8-df28-43d4-92ae-d27a388ae8e4 nodeName:}" failed. No retries permitted until 2026-01-30 17:31:40.002323745 +0000 UTC m=+2110.549687128 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/42bccab8-df28-43d4-92ae-d27a388ae8e4-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "42bccab8-df28-43d4-92ae-d27a388ae8e4") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
Jan 30 17:31:36 crc kubenswrapper[4875]: I0130 17:31:36.147394 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0392f69a-9df6-49a5-b17a-0d39c748d83c" path="/var/lib/kubelet/pods/0392f69a-9df6-49a5-b17a-0d39c748d83c/volumes"
Jan 30 17:31:36 crc kubenswrapper[4875]: I0130 17:31:36.148055 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5452c976-86c4-4bc8-8610-f33467f8715c" path="/var/lib/kubelet/pods/5452c976-86c4-4bc8-8610-f33467f8715c/volumes"
Jan 30 17:31:36 crc kubenswrapper[4875]: I0130 17:31:36.148505 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e181f1bb-324d-4c85-849e-b6fc65dfc53f" path="/var/lib/kubelet/pods/e181f1bb-324d-4c85-849e-b6fc65dfc53f/volumes"
Jan 30 17:31:36 crc kubenswrapper[4875]: I0130 17:31:36.419295 4875 generic.go:334] "Generic (PLEG): container finished" podID="11253748-5fbe-477b-8d14-754cce765ecf" containerID="6c5d45d00a1881590cebeb7367fd29414e7246088ce355ebe563b816296a0f91" exitCode=0
Jan 30 17:31:36 crc kubenswrapper[4875]: I0130 17:31:36.419409 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"11253748-5fbe-477b-8d14-754cce765ecf","Type":"ContainerDied","Data":"6c5d45d00a1881590cebeb7367fd29414e7246088ce355ebe563b816296a0f91"}
Jan 30 17:31:36 crc kubenswrapper[4875]: I0130 17:31:36.426008 4875 generic.go:334] "Generic (PLEG): container finished" podID="5b4178bb-44e0-4346-a26a-de1835e64c11" containerID="32acfdf77c301f79b044ad2dc8e01ccddcea144d0c8e3cfdd4cbdcc4e03870e0" exitCode=0
Jan 30 17:31:36 crc kubenswrapper[4875]: I0130 17:31:36.426183 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5b4178bb-44e0-4346-a26a-de1835e64c11","Type":"ContainerDied","Data":"32acfdf77c301f79b044ad2dc8e01ccddcea144d0c8e3cfdd4cbdcc4e03870e0"}
Jan 30 17:31:36 crc kubenswrapper[4875]: I0130 17:31:36.755993 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapic51e-account-delete-jr25c"
Jan 30 17:31:36 crc kubenswrapper[4875]: I0130 17:31:36.914259 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk7w9\" (UniqueName: \"kubernetes.io/projected/8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76-kube-api-access-rk7w9\") pod \"8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76\" (UID: \"8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76\") "
Jan 30 17:31:36 crc kubenswrapper[4875]: I0130 17:31:36.914351 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76-operator-scripts\") pod \"8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76\" (UID: \"8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76\") "
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:31:36 crc kubenswrapper[4875]: I0130 17:31:36.921984 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76-kube-api-access-rk7w9" (OuterVolumeSpecName: "kube-api-access-rk7w9") pod "8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76" (UID: "8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76"). InnerVolumeSpecName "kube-api-access-rk7w9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.009324 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0b6ba-account-delete-nxvcr" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.050397 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk7w9\" (UniqueName: \"kubernetes.io/projected/8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76-kube-api-access-rk7w9\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.050434 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.057075 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.067183 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1717a-account-delete-bsmr5" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.151409 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5p57\" (UniqueName: \"kubernetes.io/projected/7d7f3ab2-0758-4f19-8786-5d9cf4262bbe-kube-api-access-n5p57\") pod \"7d7f3ab2-0758-4f19-8786-5d9cf4262bbe\" (UID: \"7d7f3ab2-0758-4f19-8786-5d9cf4262bbe\") " Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.151699 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d7f3ab2-0758-4f19-8786-5d9cf4262bbe-operator-scripts\") pod \"7d7f3ab2-0758-4f19-8786-5d9cf4262bbe\" (UID: \"7d7f3ab2-0758-4f19-8786-5d9cf4262bbe\") " Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.152381 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d7f3ab2-0758-4f19-8786-5d9cf4262bbe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d7f3ab2-0758-4f19-8786-5d9cf4262bbe" (UID: "7d7f3ab2-0758-4f19-8786-5d9cf4262bbe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.154597 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d7f3ab2-0758-4f19-8786-5d9cf4262bbe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.155661 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d7f3ab2-0758-4f19-8786-5d9cf4262bbe-kube-api-access-n5p57" (OuterVolumeSpecName: "kube-api-access-n5p57") pod "7d7f3ab2-0758-4f19-8786-5d9cf4262bbe" (UID: "7d7f3ab2-0758-4f19-8786-5d9cf4262bbe"). InnerVolumeSpecName "kube-api-access-n5p57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.207636 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.255143 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4msz\" (UniqueName: \"kubernetes.io/projected/6a7194d0-d476-4ada-8048-9e3366650bdd-kube-api-access-j4msz\") pod \"6a7194d0-d476-4ada-8048-9e3366650bdd\" (UID: \"6a7194d0-d476-4ada-8048-9e3366650bdd\") " Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.255192 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7194d0-d476-4ada-8048-9e3366650bdd-operator-scripts\") pod \"6a7194d0-d476-4ada-8048-9e3366650bdd\" (UID: \"6a7194d0-d476-4ada-8048-9e3366650bdd\") " Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.255215 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11253748-5fbe-477b-8d14-754cce765ecf-config-data\") pod \"11253748-5fbe-477b-8d14-754cce765ecf\" (UID: \"11253748-5fbe-477b-8d14-754cce765ecf\") " Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.255336 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5m7c\" (UniqueName: \"kubernetes.io/projected/11253748-5fbe-477b-8d14-754cce765ecf-kube-api-access-v5m7c\") pod \"11253748-5fbe-477b-8d14-754cce765ecf\" (UID: \"11253748-5fbe-477b-8d14-754cce765ecf\") " Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.255402 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11253748-5fbe-477b-8d14-754cce765ecf-logs\") pod \"11253748-5fbe-477b-8d14-754cce765ecf\" (UID: \"11253748-5fbe-477b-8d14-754cce765ecf\") " Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.255604 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a7194d0-d476-4ada-8048-9e3366650bdd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a7194d0-d476-4ada-8048-9e3366650bdd" (UID: "6a7194d0-d476-4ada-8048-9e3366650bdd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.255755 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5p57\" (UniqueName: \"kubernetes.io/projected/7d7f3ab2-0758-4f19-8786-5d9cf4262bbe-kube-api-access-n5p57\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.255770 4875 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7194d0-d476-4ada-8048-9e3366650bdd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.256287 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11253748-5fbe-477b-8d14-754cce765ecf-logs" (OuterVolumeSpecName: "logs") pod "11253748-5fbe-477b-8d14-754cce765ecf" (UID: "11253748-5fbe-477b-8d14-754cce765ecf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.258693 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11253748-5fbe-477b-8d14-754cce765ecf-kube-api-access-v5m7c" (OuterVolumeSpecName: "kube-api-access-v5m7c") pod "11253748-5fbe-477b-8d14-754cce765ecf" (UID: "11253748-5fbe-477b-8d14-754cce765ecf"). InnerVolumeSpecName "kube-api-access-v5m7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.259666 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7194d0-d476-4ada-8048-9e3366650bdd-kube-api-access-j4msz" (OuterVolumeSpecName: "kube-api-access-j4msz") pod "6a7194d0-d476-4ada-8048-9e3366650bdd" (UID: "6a7194d0-d476-4ada-8048-9e3366650bdd"). InnerVolumeSpecName "kube-api-access-j4msz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.282809 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11253748-5fbe-477b-8d14-754cce765ecf-config-data" (OuterVolumeSpecName: "config-data") pod "11253748-5fbe-477b-8d14-754cce765ecf" (UID: "11253748-5fbe-477b-8d14-754cce765ecf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.356456 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b4178bb-44e0-4346-a26a-de1835e64c11-logs\") pod \"5b4178bb-44e0-4346-a26a-de1835e64c11\" (UID: \"5b4178bb-44e0-4346-a26a-de1835e64c11\") " Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.356539 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b4178bb-44e0-4346-a26a-de1835e64c11-config-data\") pod \"5b4178bb-44e0-4346-a26a-de1835e64c11\" (UID: \"5b4178bb-44e0-4346-a26a-de1835e64c11\") " Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.356663 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr8kx\" (UniqueName: \"kubernetes.io/projected/5b4178bb-44e0-4346-a26a-de1835e64c11-kube-api-access-hr8kx\") pod \"5b4178bb-44e0-4346-a26a-de1835e64c11\" (UID: \"5b4178bb-44e0-4346-a26a-de1835e64c11\") " Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.357152 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5m7c\" (UniqueName: \"kubernetes.io/projected/11253748-5fbe-477b-8d14-754cce765ecf-kube-api-access-v5m7c\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.357179 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11253748-5fbe-477b-8d14-754cce765ecf-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.357193 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4msz\" (UniqueName: \"kubernetes.io/projected/6a7194d0-d476-4ada-8048-9e3366650bdd-kube-api-access-j4msz\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.357205 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11253748-5fbe-477b-8d14-754cce765ecf-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:37 crc 
kubenswrapper[4875]: I0130 17:31:37.357145 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b4178bb-44e0-4346-a26a-de1835e64c11-logs" (OuterVolumeSpecName: "logs") pod "5b4178bb-44e0-4346-a26a-de1835e64c11" (UID: "5b4178bb-44e0-4346-a26a-de1835e64c11"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.376511 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b4178bb-44e0-4346-a26a-de1835e64c11-kube-api-access-hr8kx" (OuterVolumeSpecName: "kube-api-access-hr8kx") pod "5b4178bb-44e0-4346-a26a-de1835e64c11" (UID: "5b4178bb-44e0-4346-a26a-de1835e64c11"). InnerVolumeSpecName "kube-api-access-hr8kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.400129 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b4178bb-44e0-4346-a26a-de1835e64c11-config-data" (OuterVolumeSpecName: "config-data") pod "5b4178bb-44e0-4346-a26a-de1835e64c11" (UID: "5b4178bb-44e0-4346-a26a-de1835e64c11"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.442810 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"11253748-5fbe-477b-8d14-754cce765ecf","Type":"ContainerDied","Data":"b91eaa77dba865a3efa09f185286e9351df3e044eab7046029b5f1d89d0d5b93"} Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.442866 4875 scope.go:117] "RemoveContainer" containerID="6c5d45d00a1881590cebeb7367fd29414e7246088ce355ebe563b816296a0f91" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.442957 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.448662 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapic51e-account-delete-jr25c" event={"ID":"8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76","Type":"ContainerDied","Data":"27f98a645443f9187c1742661ff0f1f0775c91b012973aeee22634afb7518dd2"} Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.448825 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27f98a645443f9187c1742661ff0f1f0775c91b012973aeee22634afb7518dd2" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.448787 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapic51e-account-delete-jr25c" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.460221 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr8kx\" (UniqueName: \"kubernetes.io/projected/5b4178bb-44e0-4346-a26a-de1835e64c11-kube-api-access-hr8kx\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.460274 4875 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b4178bb-44e0-4346-a26a-de1835e64c11-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.460289 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b4178bb-44e0-4346-a26a-de1835e64c11-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.469401 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0b6ba-account-delete-nxvcr" event={"ID":"7d7f3ab2-0758-4f19-8786-5d9cf4262bbe","Type":"ContainerDied","Data":"b42ec382ca845c3f2b7570485db076487ee9563cb5f1ddf8a3d188f41307b746"} Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.469459 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42ec382ca845c3f2b7570485db076487ee9563cb5f1ddf8a3d188f41307b746" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.469572 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0b6ba-account-delete-nxvcr" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.473007 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1717a-account-delete-bsmr5" event={"ID":"6a7194d0-d476-4ada-8048-9e3366650bdd","Type":"ContainerDied","Data":"7f9b712ba335d9a257caad59044fbacc0fc91b4df814c78bafb1c783f600095d"} Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.473186 4875 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f9b712ba335d9a257caad59044fbacc0fc91b4df814c78bafb1c783f600095d" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.473328 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1717a-account-delete-bsmr5" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.478358 4875 scope.go:117] "RemoveContainer" containerID="b374cfe3fc2bcfcd29e5574f76e7cedf99dfb8bd44a0ca3549589323ee84f9cf" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.484499 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5b4178bb-44e0-4346-a26a-de1835e64c11","Type":"ContainerDied","Data":"a139fd48faf53050fce36f35f4286365f4c34e9167fe1de39a867c4a158b87f1"} Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.484658 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.544042 4875 scope.go:117] "RemoveContainer" containerID="32acfdf77c301f79b044ad2dc8e01ccddcea144d0c8e3cfdd4cbdcc4e03870e0" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.548089 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.559742 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.567890 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.581898 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.589115 4875 scope.go:117] "RemoveContainer" containerID="5cab9fe7b3bab5032944f6c000616458aaca867775a7fc55b021104df998a0dc" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.591729 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-qzfg8"] Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.607642 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-qzfg8"] Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.618157 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx"] Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.627697 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell1717a-account-delete-bsmr5"] Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.636549 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-717a-account-create-update-xn9fx"] Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.656932 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell1717a-account-delete-bsmr5"] Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.852987 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.966783 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e-config-data\") pod \"69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e\" (UID: \"69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e\") " Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.966979 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z2kg\" (UniqueName: \"kubernetes.io/projected/69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e-kube-api-access-4z2kg\") pod \"69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e\" (UID: \"69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e\") " Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.972536 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e-kube-api-access-4z2kg" (OuterVolumeSpecName: "kube-api-access-4z2kg") pod "69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e" (UID: "69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e"). InnerVolumeSpecName "kube-api-access-4z2kg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:31:37 crc kubenswrapper[4875]: I0130 17:31:37.987872 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e-config-data" (OuterVolumeSpecName: "config-data") pod "69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e" (UID: "69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:31:38 crc kubenswrapper[4875]: I0130 17:31:38.069025 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:38 crc kubenswrapper[4875]: I0130 17:31:38.069052 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z2kg\" (UniqueName: \"kubernetes.io/projected/69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e-kube-api-access-4z2kg\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:38 crc kubenswrapper[4875]: I0130 17:31:38.122478 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-b6888cc46-89gfr_e95a5815-f333-496a-a3cc-e568c1ded6ba/keystone-api/0.log" Jan 30 17:31:38 crc kubenswrapper[4875]: I0130 17:31:38.145266 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06535175-24df-4a19-8892-9936345a6338" path="/var/lib/kubelet/pods/06535175-24df-4a19-8892-9936345a6338/volumes" Jan 30 17:31:38 crc kubenswrapper[4875]: I0130 17:31:38.145827 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11253748-5fbe-477b-8d14-754cce765ecf" path="/var/lib/kubelet/pods/11253748-5fbe-477b-8d14-754cce765ecf/volumes" Jan 30 17:31:38 crc kubenswrapper[4875]: I0130 17:31:38.146418 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b4178bb-44e0-4346-a26a-de1835e64c11" path="/var/lib/kubelet/pods/5b4178bb-44e0-4346-a26a-de1835e64c11/volumes" Jan 30 17:31:38 crc kubenswrapper[4875]: I0130 17:31:38.147468 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a7194d0-d476-4ada-8048-9e3366650bdd" path="/var/lib/kubelet/pods/6a7194d0-d476-4ada-8048-9e3366650bdd/volumes" Jan 30 17:31:38 crc kubenswrapper[4875]: I0130 17:31:38.147966 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1e3597d-60b2-4556-9cf0-994b868f6fa2" path="/var/lib/kubelet/pods/b1e3597d-60b2-4556-9cf0-994b868f6fa2/volumes" Jan 30 17:31:38 crc kubenswrapper[4875]: I0130 17:31:38.498446 4875 generic.go:334] "Generic (PLEG): container finished" podID="69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e" containerID="4965f718e7f9d24a7e66c5d790455fa56f6c4ce4f1eb447be90dcf6bed1069dd" exitCode=0 Jan 30 17:31:38 crc kubenswrapper[4875]: I0130 17:31:38.498492 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e","Type":"ContainerDied","Data":"4965f718e7f9d24a7e66c5d790455fa56f6c4ce4f1eb447be90dcf6bed1069dd"} Jan 30 17:31:38 crc kubenswrapper[4875]: I0130 17:31:38.498523 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e","Type":"ContainerDied","Data":"49da033ffb4f049a5e0cb5e2aa5a09706c3b0c52fe0944f6b1f03658d40bef6a"} Jan 30 17:31:38 crc kubenswrapper[4875]: I0130 17:31:38.498541 4875 scope.go:117] "RemoveContainer" 
containerID="4965f718e7f9d24a7e66c5d790455fa56f6c4ce4f1eb447be90dcf6bed1069dd" Jan 30 17:31:38 crc kubenswrapper[4875]: I0130 17:31:38.498612 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 30 17:31:38 crc kubenswrapper[4875]: I0130 17:31:38.521222 4875 scope.go:117] "RemoveContainer" containerID="4965f718e7f9d24a7e66c5d790455fa56f6c4ce4f1eb447be90dcf6bed1069dd" Jan 30 17:31:38 crc kubenswrapper[4875]: I0130 17:31:38.521846 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:31:38 crc kubenswrapper[4875]: E0130 17:31:38.521891 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4965f718e7f9d24a7e66c5d790455fa56f6c4ce4f1eb447be90dcf6bed1069dd\": container with ID starting with 4965f718e7f9d24a7e66c5d790455fa56f6c4ce4f1eb447be90dcf6bed1069dd not found: ID does not exist" containerID="4965f718e7f9d24a7e66c5d790455fa56f6c4ce4f1eb447be90dcf6bed1069dd" Jan 30 17:31:38 crc kubenswrapper[4875]: I0130 17:31:38.521925 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4965f718e7f9d24a7e66c5d790455fa56f6c4ce4f1eb447be90dcf6bed1069dd"} err="failed to get container status \"4965f718e7f9d24a7e66c5d790455fa56f6c4ce4f1eb447be90dcf6bed1069dd\": rpc error: code = NotFound desc = could not find container \"4965f718e7f9d24a7e66c5d790455fa56f6c4ce4f1eb447be90dcf6bed1069dd\": container with ID starting with 4965f718e7f9d24a7e66c5d790455fa56f6c4ce4f1eb447be90dcf6bed1069dd not found: ID does not exist" Jan 30 17:31:38 crc kubenswrapper[4875]: I0130 17:31:38.529377 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 30 17:31:40 crc kubenswrapper[4875]: E0130 17:31:40.097359 4875 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 30 17:31:40 crc kubenswrapper[4875]: E0130 17:31:40.097435 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42bccab8-df28-43d4-92ae-d27a388ae8e4-config-data podName:42bccab8-df28-43d4-92ae-d27a388ae8e4 nodeName:}" failed. No retries permitted until 2026-01-30 17:31:48.097416063 +0000 UTC m=+2118.644779446 (durationBeforeRetry 8s). 
Jan 30 17:31:40 crc kubenswrapper[4875]: I0130 17:31:40.159103 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e" path="/var/lib/kubelet/pods/69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e/volumes"
Jan 30 17:31:40 crc kubenswrapper[4875]: I0130 17:31:40.320829 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_e387e78d-25ab-454b-9b66-d2cc13abe676/memcached/0.log"
Jan 30 17:31:40 crc kubenswrapper[4875]: I0130 17:31:40.827838 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-c51e-account-create-update-gcjcc_95fc551d-b330-4816-9166-fa1e6f145e90/mariadb-account-create-update/0.log"
Jan 30 17:31:40 crc kubenswrapper[4875]: E0130 17:31:40.924501 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:31:40 crc kubenswrapper[4875]: E0130 17:31:40.926840 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:31:40 crc kubenswrapper[4875]: E0130 17:31:40.928131 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:31:40 crc kubenswrapper[4875]: E0130 17:31:40.928195 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 30 17:31:41 crc kubenswrapper[4875]: I0130 17:31:41.325019 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-db-create-nqjhm_0ffccfaf-adf6-49e9-a626-b81376554127/mariadb-database-create/0.log"
Jan 30 17:31:41 crc kubenswrapper[4875]: I0130 17:31:41.930970 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-b6ba-account-create-update-5tfhn_bb1a954f-6cce-4ab8-b878-de0c48e9a80d/mariadb-account-create-update/0.log"
Jan 30 17:31:42 crc kubenswrapper[4875]: I0130 17:31:42.367598 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-8dsds"]
Jan 30 17:31:42 crc kubenswrapper[4875]: I0130 17:31:42.378679 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn"]
Jan 30 17:31:42 crc kubenswrapper[4875]: I0130 17:31:42.388137 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell0b6ba-account-delete-nxvcr"]
Jan 30 17:31:42 crc kubenswrapper[4875]: I0130 17:31:42.395149 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-8dsds"]
Jan 30 17:31:42 crc kubenswrapper[4875]: I0130 17:31:42.399886 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-b6ba-account-create-update-5tfhn"]
Jan 30 17:31:42 crc kubenswrapper[4875]: I0130 17:31:42.404465 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell0b6ba-account-delete-nxvcr"]
Jan 30 17:31:42 crc kubenswrapper[4875]: I0130 17:31:42.454364 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-nqjhm"]
Jan 30 17:31:42 crc kubenswrapper[4875]: I0130 17:31:42.459948 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-nqjhm"]
Jan 30 17:31:42 crc kubenswrapper[4875]: I0130 17:31:42.469136 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc"]
Jan 30 17:31:42 crc kubenswrapper[4875]: I0130 17:31:42.474254 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novaapic51e-account-delete-jr25c"]
Jan 30 17:31:42 crc kubenswrapper[4875]: I0130 17:31:42.482802 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-c51e-account-create-update-gcjcc"]
Jan 30 17:31:42 crc kubenswrapper[4875]: I0130 17:31:42.488599 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapic51e-account-delete-jr25c"]
Jan 30 17:31:44 crc kubenswrapper[4875]: I0130 17:31:44.144720 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ffccfaf-adf6-49e9-a626-b81376554127" path="/var/lib/kubelet/pods/0ffccfaf-adf6-49e9-a626-b81376554127/volumes"
Jan 30 17:31:44 crc kubenswrapper[4875]: I0130 17:31:44.145670 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d7f3ab2-0758-4f19-8786-5d9cf4262bbe" path="/var/lib/kubelet/pods/7d7f3ab2-0758-4f19-8786-5d9cf4262bbe/volumes"
Jan 30 17:31:44 crc kubenswrapper[4875]: I0130 17:31:44.146162 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76" path="/var/lib/kubelet/pods/8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76/volumes"
Jan 30 17:31:44 crc kubenswrapper[4875]: I0130 17:31:44.146630 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95fc551d-b330-4816-9166-fa1e6f145e90" path="/var/lib/kubelet/pods/95fc551d-b330-4816-9166-fa1e6f145e90/volumes"
Jan 30 17:31:44 crc kubenswrapper[4875]: I0130 17:31:44.147621 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb1a954f-6cce-4ab8-b878-de0c48e9a80d" path="/var/lib/kubelet/pods/bb1a954f-6cce-4ab8-b878-de0c48e9a80d/volumes"
Jan 30 17:31:44 crc kubenswrapper[4875]: I0130 17:31:44.148126 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5eadec5-b07e-4825-ad38-c41990e4ad98" path="/var/lib/kubelet/pods/e5eadec5-b07e-4825-ad38-c41990e4ad98/volumes"
Jan 30 17:31:44 crc kubenswrapper[4875]: I0130 17:31:44.562141 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-compute-fake1-compute-0_42bccab8-df28-43d4-92ae-d27a388ae8e4/nova-kuttl-cell1-compute-fake1-compute-compute/2.log"
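The "Cleaned up orphaned pod volumes dir" entries above are the kubelet removing per-pod state left behind by deleted pods. A small sketch of the underlying check, run as root on the node; the path is the standard kubelet state directory as it appears in the log:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        const base = "/var/lib/kubelet/pods" // path as reported in the log
        entries, err := os.ReadDir(base)
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            // A pod dir whose volumes/ subtree still exists has not been
            // cleaned up yet; the kubelet logs the removal once it is safe.
            vols := filepath.Join(base, e.Name(), "volumes")
            if _, err := os.Stat(vols); err == nil {
                fmt.Println("volumes dir still present:", vols)
            }
        }
    }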
Jan 30 17:31:45 crc kubenswrapper[4875]: E0130 17:31:45.872487 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:31:45 crc kubenswrapper[4875]: E0130 17:31:45.874006 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:31:45 crc kubenswrapper[4875]: E0130 17:31:45.875242 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:31:45 crc kubenswrapper[4875]: E0130 17:31:45.875283 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 30 17:31:46 crc kubenswrapper[4875]: I0130 17:31:46.524731 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_83732f39-75fd-4817-be96-f954dcc5fd96/galera/0.log"
Jan 30 17:31:46 crc kubenswrapper[4875]: I0130 17:31:46.935421 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_2651f38f-c3ae-4970-ab34-7b9540d5aa24/galera/0.log"
Jan 30 17:31:47 crc kubenswrapper[4875]: I0130 17:31:47.337755 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_c4f3c910-b4f4-40cf-bf87-aabb54bb76c3/openstackclient/0.log"
Jan 30 17:31:47 crc kubenswrapper[4875]: I0130 17:31:47.798934 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-696447b7b-gwj9q_2e4060f6-e91b-4f67-b959-9e2a125c05d3/placement-log/0.log"
Jan 30 17:31:48 crc kubenswrapper[4875]: E0130 17:31:48.120722 4875 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
Jan 30 17:31:48 crc kubenswrapper[4875]: E0130 17:31:48.120815 4875 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42bccab8-df28-43d4-92ae-d27a388ae8e4-config-data podName:42bccab8-df28-43d4-92ae-d27a388ae8e4 nodeName:}" failed. No retries permitted until 2026-01-30 17:32:04.120792117 +0000 UTC m=+2134.668155500 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/42bccab8-df28-43d4-92ae-d27a388ae8e4-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "42bccab8-df28-43d4-92ae-d27a388ae8e4") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
Jan 30 17:31:48 crc kubenswrapper[4875]: I0130 17:31:48.251856 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_2d4b13af-d4ec-458c-b3a9-e060171110f6/rabbitmq/0.log"
Jan 30 17:31:48 crc kubenswrapper[4875]: I0130 17:31:48.651003 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_b6ee4eec-358c-45f7-9b1a-143de69b2929/rabbitmq/0.log"
Jan 30 17:31:49 crc kubenswrapper[4875]: I0130 17:31:49.089438 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_e75a0606-ea82-4ab9-8245-feb3105a23ba/rabbitmq/0.log"
Jan 30 17:31:50 crc kubenswrapper[4875]: E0130 17:31:50.872442 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:31:50 crc kubenswrapper[4875]: E0130 17:31:50.875884 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:31:50 crc kubenswrapper[4875]: E0130 17:31:50.877478 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:31:50 crc kubenswrapper[4875]: E0130 17:31:50.877529 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 30 17:31:55 crc kubenswrapper[4875]: E0130 17:31:55.872754 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:31:55 crc kubenswrapper[4875]: E0130 17:31:55.875487 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:31:55 crc kubenswrapper[4875]: E0130 17:31:55.876711 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:31:55 crc kubenswrapper[4875]: E0130 17:31:55.876752 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 30 17:32:00 crc kubenswrapper[4875]: E0130 17:32:00.871403 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:32:00 crc kubenswrapper[4875]: E0130 17:32:00.875017 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:32:00 crc kubenswrapper[4875]: E0130 17:32:00.876491 4875 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 30 17:32:00 crc kubenswrapper[4875]: E0130 17:32:00.876568 4875 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 30 17:32:03 crc kubenswrapper[4875]: I0130 17:32:03.705407 4875 generic.go:334] "Generic (PLEG): container finished" podID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f" exitCode=137
Jan 30 17:32:03 crc kubenswrapper[4875]: I0130 17:32:03.705766 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"42bccab8-df28-43d4-92ae-d27a388ae8e4","Type":"ContainerDied","Data":"011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f"}
Jan 30 17:32:03 crc kubenswrapper[4875]: I0130 17:32:03.705990 4875 scope.go:117] "RemoveContainer" containerID="f1b7e7ee344be54533f85e2122d5409d002f007d9864920a4c3de0ea21b6c1c4"
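The repeated ExecSync failures above come from a readiness probe that runs /usr/bin/pgrep against the nova-compute process; the runtime cannot register an exec while the container is stopping, and the container finally exits with code 137 (killed after the stop grace period). A sketch of the probe those entries imply; the command is copied from the log, the timing field is an assumption based on the roughly 5-second spacing of attempts:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        probe := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                Exec: &corev1.ExecAction{
                    // cmd exactly as reported by the kubelet above
                    Command: []string{"/usr/bin/pgrep", "-r", "DRST", "nova-compute"},
                },
            },
            PeriodSeconds: 5, // assumed; probe attempts in the log are ~5s apart
        }
        fmt.Printf("%+v\n", probe)
    }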
Jan 30 17:32:03 crc kubenswrapper[4875]: I0130 17:32:03.771702 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 30 17:32:03 crc kubenswrapper[4875]: I0130 17:32:03.969288 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j85qp\" (UniqueName: \"kubernetes.io/projected/42bccab8-df28-43d4-92ae-d27a388ae8e4-kube-api-access-j85qp\") pod \"42bccab8-df28-43d4-92ae-d27a388ae8e4\" (UID: \"42bccab8-df28-43d4-92ae-d27a388ae8e4\") "
Jan 30 17:32:03 crc kubenswrapper[4875]: I0130 17:32:03.969353 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42bccab8-df28-43d4-92ae-d27a388ae8e4-config-data\") pod \"42bccab8-df28-43d4-92ae-d27a388ae8e4\" (UID: \"42bccab8-df28-43d4-92ae-d27a388ae8e4\") "
Jan 30 17:32:03 crc kubenswrapper[4875]: I0130 17:32:03.983916 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42bccab8-df28-43d4-92ae-d27a388ae8e4-kube-api-access-j85qp" (OuterVolumeSpecName: "kube-api-access-j85qp") pod "42bccab8-df28-43d4-92ae-d27a388ae8e4" (UID: "42bccab8-df28-43d4-92ae-d27a388ae8e4"). InnerVolumeSpecName "kube-api-access-j85qp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:32:03 crc kubenswrapper[4875]: I0130 17:32:03.999553 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42bccab8-df28-43d4-92ae-d27a388ae8e4-config-data" (OuterVolumeSpecName: "config-data") pod "42bccab8-df28-43d4-92ae-d27a388ae8e4" (UID: "42bccab8-df28-43d4-92ae-d27a388ae8e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:32:04 crc kubenswrapper[4875]: I0130 17:32:04.070763 4875 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42bccab8-df28-43d4-92ae-d27a388ae8e4-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:32:04 crc kubenswrapper[4875]: I0130 17:32:04.070792 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j85qp\" (UniqueName: \"kubernetes.io/projected/42bccab8-df28-43d4-92ae-d27a388ae8e4-kube-api-access-j85qp\") on node \"crc\" DevicePath \"\""
Jan 30 17:32:04 crc kubenswrapper[4875]: I0130 17:32:04.719448 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"42bccab8-df28-43d4-92ae-d27a388ae8e4","Type":"ContainerDied","Data":"af06ce4a796a8183f48a75f6f8a7e0be1340a9a91eefff695ca9547f48fb9016"}
Jan 30 17:32:04 crc kubenswrapper[4875]: I0130 17:32:04.719899 4875 scope.go:117] "RemoveContainer" containerID="011b18e3c89b646b2767c6ff7cc742c261a73a5c58e80dc3e944c7be4814c18f"
Jan 30 17:32:04 crc kubenswrapper[4875]: I0130 17:32:04.719493 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 30 17:32:04 crc kubenswrapper[4875]: I0130 17:32:04.755815 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"]
Jan 30 17:32:04 crc kubenswrapper[4875]: I0130 17:32:04.760877 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"]
Jan 30 17:32:06 crc kubenswrapper[4875]: I0130 17:32:06.145173 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" path="/var/lib/kubelet/pods/42bccab8-df28-43d4-92ae-d27a388ae8e4/volumes"
Jan 30 17:32:18 crc kubenswrapper[4875]: I0130 17:32:18.914314 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n_7390a607-60b7-4f18-af7a-b4391c97a01f/extract/0.log"
Jan 30 17:32:19 crc kubenswrapper[4875]: I0130 17:32:19.304240 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-mjlwh_be56ef14-c793-4e0a-82bb-4e29b4182e22/manager/0.log"
Jan 30 17:32:19 crc kubenswrapper[4875]: I0130 17:32:19.711137 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd_f5b461b0-718a-4065-bf1d-db2860d2af04/extract/0.log"
Jan 30 17:32:20 crc kubenswrapper[4875]: I0130 17:32:20.149004 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-dm9v4_4d112d50-a873-440f-b366-332c135cd9cf/manager/0.log"
Jan 30 17:32:20 crc kubenswrapper[4875]: I0130 17:32:20.546629 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-znpxc_daa61e94-524b-445a-8086-63a4a3db6764/manager/0.log"
Jan 30 17:32:20 crc kubenswrapper[4875]: I0130 17:32:20.964917 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-gbhbx_89036e1f-6293-456d-ae24-6a52b2a102d9/manager/0.log"
Jan 30 17:32:21 crc kubenswrapper[4875]: I0130 17:32:21.340834 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-bvnzf_d6508139-1b0b-45c7-b307-901c0903370f/manager/0.log"
Jan 30 17:32:21 crc kubenswrapper[4875]: I0130 17:32:21.707748 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-fpcz4_14395019-dadc-4326-8a88-3f8746438a60/manager/0.log"
Jan 30 17:32:22 crc kubenswrapper[4875]: I0130 17:32:22.242298 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-frg6k_9a2f99f7-889a-4847-88f0-3241c2fa3353/manager/0.log"
Jan 30 17:32:22 crc kubenswrapper[4875]: I0130 17:32:22.610548 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-fdmpd_792a5bfa-13bb-4e86-ab45-09dd184fcab3/manager/0.log"
Jan 30 17:32:23 crc kubenswrapper[4875]: I0130 17:32:23.040011 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-cpvgb_1a65b1f7-9d89-4a8b-9af9-811495df5c5f/manager/0.log"
Jan 30 17:32:23 crc kubenswrapper[4875]: I0130 17:32:23.440454 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-nzlnv_408af5cb-dfce-44ff-9b25-5378f194196f/manager/0.log"
Jan 30 17:32:23 crc kubenswrapper[4875]: I0130 17:32:23.889813 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-d74js_a8c14e5e-0827-45c6-8e21-c524ad39fb11/manager/0.log"
Jan 30 17:32:24 crc kubenswrapper[4875]: I0130 17:32:24.297055 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-w75bt_972271b3-306a-4015-be23-c1320e0c296e/manager/0.log"
Jan 30 17:32:25 crc kubenswrapper[4875]: I0130 17:32:25.092978 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-64bd9bf7b6-llx69_bdc3f51f-4dc1-45bd-b26d-1cacf01f9097/manager/0.log"
Jan 30 17:32:25 crc kubenswrapper[4875]: I0130 17:32:25.482261 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-pd8bb_87fba6ee-2538-48b8-8a3d-cdd9308305a6/registry-server/0.log"
Jan 30 17:32:25 crc kubenswrapper[4875]: I0130 17:32:25.894311 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-h9cpk_044cc22a-35c3-49ac-8c70-80478ce3f670/manager/0.log"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.287145 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.287211 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.289982 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2_59490e66-2646-4a95-9b81-e372fbd2f921/manager/0.log"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.814514 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n6c5t"]
Jan 30 17:32:26 crc kubenswrapper[4875]: E0130 17:32:26.815211 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76" containerName="mariadb-account-delete"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815225 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76" containerName="mariadb-account-delete"
Jan 30 17:32:26 crc kubenswrapper[4875]: E0130 17:32:26.815242 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815252 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
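The machine-config-daemon liveness failure above is a plain HTTP GET against 127.0.0.1:8798/health being refused on the node. A sketch that reproduces the same check from the host; the URL is taken from the log, the timeout is an assumption:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := http.Client{Timeout: 2 * time.Second} // assumed timeout
        resp, err := client.Get("http://127.0.0.1:8798/health")
        if err != nil {
            // Matches the log: dial tcp 127.0.0.1:8798: connect: connection refused
            fmt.Println("probe failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("probe status:", resp.Status)
    }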
Jan 30 17:32:26 crc kubenswrapper[4875]: E0130 17:32:26.815267 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11253748-5fbe-477b-8d14-754cce765ecf" containerName="nova-kuttl-api-api"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815275 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="11253748-5fbe-477b-8d14-754cce765ecf" containerName="nova-kuttl-api-api"
Jan 30 17:32:26 crc kubenswrapper[4875]: E0130 17:32:26.815290 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e181f1bb-324d-4c85-849e-b6fc65dfc53f" containerName="nova-kuttl-scheduler-scheduler"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815298 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="e181f1bb-324d-4c85-849e-b6fc65dfc53f" containerName="nova-kuttl-scheduler-scheduler"
Jan 30 17:32:26 crc kubenswrapper[4875]: E0130 17:32:26.815314 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5452c976-86c4-4bc8-8610-f33467f8715c" containerName="nova-kuttl-cell1-novncproxy-novncproxy"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815323 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="5452c976-86c4-4bc8-8610-f33467f8715c" containerName="nova-kuttl-cell1-novncproxy-novncproxy"
Jan 30 17:32:26 crc kubenswrapper[4875]: E0130 17:32:26.815339 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4178bb-44e0-4346-a26a-de1835e64c11" containerName="nova-kuttl-metadata-log"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815348 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4178bb-44e0-4346-a26a-de1835e64c11" containerName="nova-kuttl-metadata-log"
Jan 30 17:32:26 crc kubenswrapper[4875]: E0130 17:32:26.815365 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815374 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 30 17:32:26 crc kubenswrapper[4875]: E0130 17:32:26.815389 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0392f69a-9df6-49a5-b17a-0d39c748d83c" containerName="nova-kuttl-cell1-conductor-conductor"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815398 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="0392f69a-9df6-49a5-b17a-0d39c748d83c" containerName="nova-kuttl-cell1-conductor-conductor"
Jan 30 17:32:26 crc kubenswrapper[4875]: E0130 17:32:26.815412 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e" containerName="nova-kuttl-cell0-conductor-conductor"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815420 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e" containerName="nova-kuttl-cell0-conductor-conductor"
Jan 30 17:32:26 crc kubenswrapper[4875]: E0130 17:32:26.815441 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4178bb-44e0-4346-a26a-de1835e64c11" containerName="nova-kuttl-metadata-metadata"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815450 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4178bb-44e0-4346-a26a-de1835e64c11" containerName="nova-kuttl-metadata-metadata"
Jan 30 17:32:26 crc kubenswrapper[4875]: E0130 17:32:26.815466 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11253748-5fbe-477b-8d14-754cce765ecf" containerName="nova-kuttl-api-log"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815474 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="11253748-5fbe-477b-8d14-754cce765ecf" containerName="nova-kuttl-api-log"
Jan 30 17:32:26 crc kubenswrapper[4875]: E0130 17:32:26.815486 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7f3ab2-0758-4f19-8786-5d9cf4262bbe" containerName="mariadb-account-delete"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815494 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7f3ab2-0758-4f19-8786-5d9cf4262bbe" containerName="mariadb-account-delete"
Jan 30 17:32:26 crc kubenswrapper[4875]: E0130 17:32:26.815511 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7194d0-d476-4ada-8048-9e3366650bdd" containerName="mariadb-account-delete"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815519 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7194d0-d476-4ada-8048-9e3366650bdd" containerName="mariadb-account-delete"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815708 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d54bbcc-f7ab-45b3-9ba1-af09e2f1bf76" containerName="mariadb-account-delete"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815723 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7194d0-d476-4ada-8048-9e3366650bdd" containerName="mariadb-account-delete"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815737 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b49ed4-8de7-45eb-9dd1-ec5e27e4a50e" containerName="nova-kuttl-cell0-conductor-conductor"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815748 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b4178bb-44e0-4346-a26a-de1835e64c11" containerName="nova-kuttl-metadata-log"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815763 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="e181f1bb-324d-4c85-849e-b6fc65dfc53f" containerName="nova-kuttl-scheduler-scheduler"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815776 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815790 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815800 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="0392f69a-9df6-49a5-b17a-0d39c748d83c" containerName="nova-kuttl-cell1-conductor-conductor"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815815 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b4178bb-44e0-4346-a26a-de1835e64c11" containerName="nova-kuttl-metadata-metadata"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815827 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815839 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d7f3ab2-0758-4f19-8786-5d9cf4262bbe" containerName="mariadb-account-delete"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815852 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="11253748-5fbe-477b-8d14-754cce765ecf" containerName="nova-kuttl-api-api"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815864 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="11253748-5fbe-477b-8d14-754cce765ecf" containerName="nova-kuttl-api-log"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.815877 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="5452c976-86c4-4bc8-8610-f33467f8715c" containerName="nova-kuttl-cell1-novncproxy-novncproxy"
Jan 30 17:32:26 crc kubenswrapper[4875]: E0130 17:32:26.816047 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.816057 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="42bccab8-df28-43d4-92ae-d27a388ae8e4" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.818954 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n6c5t"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.824068 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n6c5t"]
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.950359 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-utilities\") pod \"certified-operators-n6c5t\" (UID: \"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5\") " pod="openshift-marketplace/certified-operators-n6c5t"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.950399 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-catalog-content\") pod \"certified-operators-n6c5t\" (UID: \"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5\") " pod="openshift-marketplace/certified-operators-n6c5t"
Jan 30 17:32:26 crc kubenswrapper[4875]: I0130 17:32:26.950420 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-577r8\" (UniqueName: \"kubernetes.io/projected/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-kube-api-access-577r8\") pod \"certified-operators-n6c5t\" (UID: \"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5\") " pod="openshift-marketplace/certified-operators-n6c5t"
Jan 30 17:32:27 crc kubenswrapper[4875]: I0130 17:32:27.012258 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6f764c8dd-9ntw2_662b188b-86ea-439e-a40b-6284d49e476e/manager/0.log"
Jan 30 17:32:27 crc kubenswrapper[4875]: I0130 17:32:27.051254 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-utilities\") pod \"certified-operators-n6c5t\" (UID: \"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5\") " pod="openshift-marketplace/certified-operators-n6c5t"
Jan 30 17:32:27 crc kubenswrapper[4875]: I0130 17:32:27.051298 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-catalog-content\") pod \"certified-operators-n6c5t\" (UID: \"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5\") " pod="openshift-marketplace/certified-operators-n6c5t"
Jan 30 17:32:27 crc kubenswrapper[4875]: I0130 17:32:27.051322 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-577r8\" (UniqueName: \"kubernetes.io/projected/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-kube-api-access-577r8\") pod \"certified-operators-n6c5t\" (UID: \"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5\") " pod="openshift-marketplace/certified-operators-n6c5t"
Jan 30 17:32:27 crc kubenswrapper[4875]: I0130 17:32:27.051855 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-utilities\") pod \"certified-operators-n6c5t\" (UID: \"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5\") " pod="openshift-marketplace/certified-operators-n6c5t"
Jan 30 17:32:27 crc kubenswrapper[4875]: I0130 17:32:27.054553 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-catalog-content\") pod \"certified-operators-n6c5t\" (UID: \"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5\") " pod="openshift-marketplace/certified-operators-n6c5t"
Jan 30 17:32:27 crc kubenswrapper[4875]: I0130 17:32:27.073608 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-577r8\" (UniqueName: \"kubernetes.io/projected/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-kube-api-access-577r8\") pod \"certified-operators-n6c5t\" (UID: \"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5\") " pod="openshift-marketplace/certified-operators-n6c5t"
Need to start a new one" pod="openshift-marketplace/certified-operators-n6c5t" Jan 30 17:32:27 crc kubenswrapper[4875]: I0130 17:32:27.462675 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wdth9_90d2ca44-318f-4c47-8a9e-2781ac1151e6/registry-server/0.log" Jan 30 17:32:27 crc kubenswrapper[4875]: I0130 17:32:27.622401 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n6c5t"] Jan 30 17:32:27 crc kubenswrapper[4875]: I0130 17:32:27.930193 4875 generic.go:334] "Generic (PLEG): container finished" podID="2e3b9c8e-71ae-4699-9cc0-779287ff7fd5" containerID="17445b7302a8e9b9402da4beec82d13d1393a16402060527580784a7a5e880ad" exitCode=0 Jan 30 17:32:27 crc kubenswrapper[4875]: I0130 17:32:27.930260 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6c5t" event={"ID":"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5","Type":"ContainerDied","Data":"17445b7302a8e9b9402da4beec82d13d1393a16402060527580784a7a5e880ad"} Jan 30 17:32:27 crc kubenswrapper[4875]: I0130 17:32:27.930338 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6c5t" event={"ID":"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5","Type":"ContainerStarted","Data":"a940c7077729eb8251da6f2f6eeaa1472e321c81d06e0cf022111db9177e4ec6"} Jan 30 17:32:27 crc kubenswrapper[4875]: I0130 17:32:27.942677 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-xnw72_cefac6c5-5765-4646-a5c1-9832fb0170d6/manager/0.log" Jan 30 17:32:28 crc kubenswrapper[4875]: I0130 17:32:28.388782 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-8dhn6_d3967345-0c3d-431b-8408-3f7beaba730d/manager/0.log" Jan 30 17:32:28 crc kubenswrapper[4875]: I0130 17:32:28.858913 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-nj2ss_86be17ce-228e-46ba-84df-5134bdb00c99/operator/0.log" Jan 30 17:32:28 crc kubenswrapper[4875]: I0130 17:32:28.942285 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6c5t" event={"ID":"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5","Type":"ContainerStarted","Data":"5c81278d4adf15d31e029c363f53fd55f6391e618a818df7298fc059c800b767"} Jan 30 17:32:29 crc kubenswrapper[4875]: I0130 17:32:29.240517 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-zxs9g_f4df6dfd-91eb-4d61-93fd-b93e111eb127/manager/0.log" Jan 30 17:32:29 crc kubenswrapper[4875]: I0130 17:32:29.678174 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-ns9pg_921d8e30-00c8-43e3-b44a-4de9e4450ba2/manager/0.log" Jan 30 17:32:29 crc kubenswrapper[4875]: I0130 17:32:29.955195 4875 generic.go:334] "Generic (PLEG): container finished" podID="2e3b9c8e-71ae-4699-9cc0-779287ff7fd5" containerID="5c81278d4adf15d31e029c363f53fd55f6391e618a818df7298fc059c800b767" exitCode=0 Jan 30 17:32:29 crc kubenswrapper[4875]: I0130 17:32:29.955241 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6c5t" event={"ID":"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5","Type":"ContainerDied","Data":"5c81278d4adf15d31e029c363f53fd55f6391e618a818df7298fc059c800b767"} Jan 30 
17:32:30 crc kubenswrapper[4875]: I0130 17:32:30.057727 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-ld7cp_128260c8-c860-43f1-acd0-b5d9ed7d3f01/manager/0.log" Jan 30 17:32:30 crc kubenswrapper[4875]: I0130 17:32:30.466189 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-z4fxd_66127cf7-84e7-4bb6-9830-936f7e20586d/manager/0.log" Jan 30 17:32:31 crc kubenswrapper[4875]: I0130 17:32:31.971019 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6c5t" event={"ID":"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5","Type":"ContainerStarted","Data":"421b0ede8925517209f95150f2d215e43addb82043e18d9924ca046c914426a7"} Jan 30 17:32:31 crc kubenswrapper[4875]: I0130 17:32:31.995485 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n6c5t" podStartSLOduration=2.533895409 podStartE2EDuration="5.995464855s" podCreationTimestamp="2026-01-30 17:32:26 +0000 UTC" firstStartedPulling="2026-01-30 17:32:27.931893588 +0000 UTC m=+2158.479257011" lastFinishedPulling="2026-01-30 17:32:31.393463074 +0000 UTC m=+2161.940826457" observedRunningTime="2026-01-30 17:32:31.990457834 +0000 UTC m=+2162.537821227" watchObservedRunningTime="2026-01-30 17:32:31.995464855 +0000 UTC m=+2162.542828238" Jan 30 17:32:35 crc kubenswrapper[4875]: I0130 17:32:35.274898 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-b6888cc46-89gfr_e95a5815-f333-496a-a3cc-e568c1ded6ba/keystone-api/0.log" Jan 30 17:32:37 crc kubenswrapper[4875]: I0130 17:32:37.144474 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n6c5t" Jan 30 17:32:37 crc kubenswrapper[4875]: I0130 17:32:37.146351 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n6c5t" Jan 30 17:32:37 crc kubenswrapper[4875]: I0130 17:32:37.186976 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n6c5t" Jan 30 17:32:37 crc kubenswrapper[4875]: I0130 17:32:37.651283 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_e387e78d-25ab-454b-9b66-d2cc13abe676/memcached/0.log" Jan 30 17:32:38 crc kubenswrapper[4875]: I0130 17:32:38.078847 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n6c5t" Jan 30 17:32:38 crc kubenswrapper[4875]: I0130 17:32:38.128759 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n6c5t"] Jan 30 17:32:38 crc kubenswrapper[4875]: I0130 17:32:38.186359 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_83732f39-75fd-4817-be96-f954dcc5fd96/galera/0.log" Jan 30 17:32:38 crc kubenswrapper[4875]: I0130 17:32:38.705008 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_2651f38f-c3ae-4970-ab34-7b9540d5aa24/galera/0.log" Jan 30 17:32:39 crc kubenswrapper[4875]: I0130 17:32:39.249491 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_c4f3c910-b4f4-40cf-bf87-aabb54bb76c3/openstackclient/0.log" Jan 30 17:32:39 crc kubenswrapper[4875]: I0130 17:32:39.772362 4875 
log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-696447b7b-gwj9q_2e4060f6-e91b-4f67-b959-9e2a125c05d3/placement-log/0.log" Jan 30 17:32:40 crc kubenswrapper[4875]: I0130 17:32:40.035660 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n6c5t" podUID="2e3b9c8e-71ae-4699-9cc0-779287ff7fd5" containerName="registry-server" containerID="cri-o://421b0ede8925517209f95150f2d215e43addb82043e18d9924ca046c914426a7" gracePeriod=2 Jan 30 17:32:40 crc kubenswrapper[4875]: I0130 17:32:40.293753 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_2d4b13af-d4ec-458c-b3a9-e060171110f6/rabbitmq/0.log" Jan 30 17:32:40 crc kubenswrapper[4875]: I0130 17:32:40.842390 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_b6ee4eec-358c-45f7-9b1a-143de69b2929/rabbitmq/0.log" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.011251 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n6c5t" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.045047 4875 generic.go:334] "Generic (PLEG): container finished" podID="2e3b9c8e-71ae-4699-9cc0-779287ff7fd5" containerID="421b0ede8925517209f95150f2d215e43addb82043e18d9924ca046c914426a7" exitCode=0 Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.045080 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6c5t" event={"ID":"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5","Type":"ContainerDied","Data":"421b0ede8925517209f95150f2d215e43addb82043e18d9924ca046c914426a7"} Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.045104 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n6c5t" event={"ID":"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5","Type":"ContainerDied","Data":"a940c7077729eb8251da6f2f6eeaa1472e321c81d06e0cf022111db9177e4ec6"} Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.045121 4875 scope.go:117] "RemoveContainer" containerID="421b0ede8925517209f95150f2d215e43addb82043e18d9924ca046c914426a7" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.045231 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n6c5t" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.072081 4875 scope.go:117] "RemoveContainer" containerID="5c81278d4adf15d31e029c363f53fd55f6391e618a818df7298fc059c800b767" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.090496 4875 scope.go:117] "RemoveContainer" containerID="17445b7302a8e9b9402da4beec82d13d1393a16402060527580784a7a5e880ad" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.115914 4875 scope.go:117] "RemoveContainer" containerID="421b0ede8925517209f95150f2d215e43addb82043e18d9924ca046c914426a7" Jan 30 17:32:41 crc kubenswrapper[4875]: E0130 17:32:41.116376 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"421b0ede8925517209f95150f2d215e43addb82043e18d9924ca046c914426a7\": container with ID starting with 421b0ede8925517209f95150f2d215e43addb82043e18d9924ca046c914426a7 not found: ID does not exist" containerID="421b0ede8925517209f95150f2d215e43addb82043e18d9924ca046c914426a7" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.116407 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"421b0ede8925517209f95150f2d215e43addb82043e18d9924ca046c914426a7"} err="failed to get container status \"421b0ede8925517209f95150f2d215e43addb82043e18d9924ca046c914426a7\": rpc error: code = NotFound desc = could not find container \"421b0ede8925517209f95150f2d215e43addb82043e18d9924ca046c914426a7\": container with ID starting with 421b0ede8925517209f95150f2d215e43addb82043e18d9924ca046c914426a7 not found: ID does not exist" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.116428 4875 scope.go:117] "RemoveContainer" containerID="5c81278d4adf15d31e029c363f53fd55f6391e618a818df7298fc059c800b767" Jan 30 17:32:41 crc kubenswrapper[4875]: E0130 17:32:41.116671 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c81278d4adf15d31e029c363f53fd55f6391e618a818df7298fc059c800b767\": container with ID starting with 5c81278d4adf15d31e029c363f53fd55f6391e618a818df7298fc059c800b767 not found: ID does not exist" containerID="5c81278d4adf15d31e029c363f53fd55f6391e618a818df7298fc059c800b767" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.116708 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c81278d4adf15d31e029c363f53fd55f6391e618a818df7298fc059c800b767"} err="failed to get container status \"5c81278d4adf15d31e029c363f53fd55f6391e618a818df7298fc059c800b767\": rpc error: code = NotFound desc = could not find container \"5c81278d4adf15d31e029c363f53fd55f6391e618a818df7298fc059c800b767\": container with ID starting with 5c81278d4adf15d31e029c363f53fd55f6391e618a818df7298fc059c800b767 not found: ID does not exist" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.116726 4875 scope.go:117] "RemoveContainer" containerID="17445b7302a8e9b9402da4beec82d13d1393a16402060527580784a7a5e880ad" Jan 30 17:32:41 crc kubenswrapper[4875]: E0130 17:32:41.116909 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17445b7302a8e9b9402da4beec82d13d1393a16402060527580784a7a5e880ad\": container with ID starting with 17445b7302a8e9b9402da4beec82d13d1393a16402060527580784a7a5e880ad not found: ID does not exist" containerID="17445b7302a8e9b9402da4beec82d13d1393a16402060527580784a7a5e880ad" 
Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.116932 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17445b7302a8e9b9402da4beec82d13d1393a16402060527580784a7a5e880ad"} err="failed to get container status \"17445b7302a8e9b9402da4beec82d13d1393a16402060527580784a7a5e880ad\": rpc error: code = NotFound desc = could not find container \"17445b7302a8e9b9402da4beec82d13d1393a16402060527580784a7a5e880ad\": container with ID starting with 17445b7302a8e9b9402da4beec82d13d1393a16402060527580784a7a5e880ad not found: ID does not exist" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.136364 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-catalog-content\") pod \"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5\" (UID: \"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5\") " Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.136520 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-utilities\") pod \"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5\" (UID: \"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5\") " Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.136629 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-577r8\" (UniqueName: \"kubernetes.io/projected/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-kube-api-access-577r8\") pod \"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5\" (UID: \"2e3b9c8e-71ae-4699-9cc0-779287ff7fd5\") " Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.137349 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-utilities" (OuterVolumeSpecName: "utilities") pod "2e3b9c8e-71ae-4699-9cc0-779287ff7fd5" (UID: "2e3b9c8e-71ae-4699-9cc0-779287ff7fd5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.141448 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-kube-api-access-577r8" (OuterVolumeSpecName: "kube-api-access-577r8") pod "2e3b9c8e-71ae-4699-9cc0-779287ff7fd5" (UID: "2e3b9c8e-71ae-4699-9cc0-779287ff7fd5"). InnerVolumeSpecName "kube-api-access-577r8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.186355 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e3b9c8e-71ae-4699-9cc0-779287ff7fd5" (UID: "2e3b9c8e-71ae-4699-9cc0-779287ff7fd5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.239628 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.239709 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.239745 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-577r8\" (UniqueName: \"kubernetes.io/projected/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5-kube-api-access-577r8\") on node \"crc\" DevicePath \"\"" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.360240 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_e75a0606-ea82-4ab9-8245-feb3105a23ba/rabbitmq/0.log" Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.386716 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n6c5t"] Jan 30 17:32:41 crc kubenswrapper[4875]: I0130 17:32:41.391177 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n6c5t"] Jan 30 17:32:42 crc kubenswrapper[4875]: I0130 17:32:42.145101 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e3b9c8e-71ae-4699-9cc0-779287ff7fd5" path="/var/lib/kubelet/pods/2e3b9c8e-71ae-4699-9cc0-779287ff7fd5/volumes" Jan 30 17:32:56 crc kubenswrapper[4875]: I0130 17:32:56.287296 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:32:56 crc kubenswrapper[4875]: I0130 17:32:56.287907 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:33:11 crc kubenswrapper[4875]: I0130 17:33:11.012677 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n_7390a607-60b7-4f18-af7a-b4391c97a01f/extract/0.log" Jan 30 17:33:11 crc kubenswrapper[4875]: I0130 17:33:11.390756 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-mjlwh_be56ef14-c793-4e0a-82bb-4e29b4182e22/manager/0.log" Jan 30 17:33:11 crc kubenswrapper[4875]: I0130 17:33:11.783447 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd_f5b461b0-718a-4065-bf1d-db2860d2af04/extract/0.log" Jan 30 17:33:12 crc kubenswrapper[4875]: I0130 17:33:12.174420 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-dm9v4_4d112d50-a873-440f-b366-332c135cd9cf/manager/0.log" Jan 30 17:33:12 crc kubenswrapper[4875]: I0130 17:33:12.572411 4875 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-znpxc_daa61e94-524b-445a-8086-63a4a3db6764/manager/0.log" Jan 30 17:33:12 crc kubenswrapper[4875]: I0130 17:33:12.968438 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-gbhbx_89036e1f-6293-456d-ae24-6a52b2a102d9/manager/0.log" Jan 30 17:33:13 crc kubenswrapper[4875]: I0130 17:33:13.618637 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-bvnzf_d6508139-1b0b-45c7-b307-901c0903370f/manager/0.log" Jan 30 17:33:14 crc kubenswrapper[4875]: I0130 17:33:14.102792 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-fpcz4_14395019-dadc-4326-8a88-3f8746438a60/manager/0.log" Jan 30 17:33:14 crc kubenswrapper[4875]: I0130 17:33:14.605901 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-frg6k_9a2f99f7-889a-4847-88f0-3241c2fa3353/manager/0.log" Jan 30 17:33:15 crc kubenswrapper[4875]: I0130 17:33:15.041618 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-fdmpd_792a5bfa-13bb-4e86-ab45-09dd184fcab3/manager/0.log" Jan 30 17:33:15 crc kubenswrapper[4875]: I0130 17:33:15.477724 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-cpvgb_1a65b1f7-9d89-4a8b-9af9-811495df5c5f/manager/0.log" Jan 30 17:33:15 crc kubenswrapper[4875]: I0130 17:33:15.870680 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-nzlnv_408af5cb-dfce-44ff-9b25-5378f194196f/manager/0.log" Jan 30 17:33:16 crc kubenswrapper[4875]: I0130 17:33:16.286210 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-d74js_a8c14e5e-0827-45c6-8e21-c524ad39fb11/manager/0.log" Jan 30 17:33:16 crc kubenswrapper[4875]: I0130 17:33:16.749065 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-w75bt_972271b3-306a-4015-be23-c1320e0c296e/manager/0.log" Jan 30 17:33:17 crc kubenswrapper[4875]: I0130 17:33:17.542803 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-64bd9bf7b6-llx69_bdc3f51f-4dc1-45bd-b26d-1cacf01f9097/manager/0.log" Jan 30 17:33:18 crc kubenswrapper[4875]: I0130 17:33:18.003960 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-pd8bb_87fba6ee-2538-48b8-8a3d-cdd9308305a6/registry-server/0.log" Jan 30 17:33:18 crc kubenswrapper[4875]: I0130 17:33:18.463309 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-h9cpk_044cc22a-35c3-49ac-8c70-80478ce3f670/manager/0.log" Jan 30 17:33:18 crc kubenswrapper[4875]: I0130 17:33:18.904479 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2_59490e66-2646-4a95-9b81-e372fbd2f921/manager/0.log" Jan 30 17:33:19 crc kubenswrapper[4875]: I0130 17:33:19.591908 4875 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6f764c8dd-9ntw2_662b188b-86ea-439e-a40b-6284d49e476e/manager/0.log" Jan 30 17:33:19 crc kubenswrapper[4875]: I0130 17:33:19.987718 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wdth9_90d2ca44-318f-4c47-8a9e-2781ac1151e6/registry-server/0.log" Jan 30 17:33:20 crc kubenswrapper[4875]: I0130 17:33:20.393826 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-xnw72_cefac6c5-5765-4646-a5c1-9832fb0170d6/manager/0.log" Jan 30 17:33:20 crc kubenswrapper[4875]: I0130 17:33:20.785254 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-8dhn6_d3967345-0c3d-431b-8408-3f7beaba730d/manager/0.log" Jan 30 17:33:21 crc kubenswrapper[4875]: I0130 17:33:21.174491 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-nj2ss_86be17ce-228e-46ba-84df-5134bdb00c99/operator/0.log" Jan 30 17:33:21 crc kubenswrapper[4875]: I0130 17:33:21.578126 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-zxs9g_f4df6dfd-91eb-4d61-93fd-b93e111eb127/manager/0.log" Jan 30 17:33:21 crc kubenswrapper[4875]: I0130 17:33:21.992472 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-ns9pg_921d8e30-00c8-43e3-b44a-4de9e4450ba2/manager/0.log" Jan 30 17:33:22 crc kubenswrapper[4875]: I0130 17:33:22.356314 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-ld7cp_128260c8-c860-43f1-acd0-b5d9ed7d3f01/manager/0.log" Jan 30 17:33:22 crc kubenswrapper[4875]: I0130 17:33:22.767021 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-z4fxd_66127cf7-84e7-4bb6-9830-936f7e20586d/manager/0.log" Jan 30 17:33:26 crc kubenswrapper[4875]: I0130 17:33:26.287103 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:33:26 crc kubenswrapper[4875]: I0130 17:33:26.287409 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:33:26 crc kubenswrapper[4875]: I0130 17:33:26.287452 4875 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" Jan 30 17:33:26 crc kubenswrapper[4875]: I0130 17:33:26.288030 4875 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52"} pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:33:26 crc 
kubenswrapper[4875]: I0130 17:33:26.288091 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" containerID="cri-o://704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" gracePeriod=600 Jan 30 17:33:26 crc kubenswrapper[4875]: E0130 17:33:26.408748 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:33:26 crc kubenswrapper[4875]: I0130 17:33:26.420734 4875 generic.go:334] "Generic (PLEG): container finished" podID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" exitCode=0 Jan 30 17:33:26 crc kubenswrapper[4875]: I0130 17:33:26.420792 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerDied","Data":"704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52"} Jan 30 17:33:26 crc kubenswrapper[4875]: I0130 17:33:26.420836 4875 scope.go:117] "RemoveContainer" containerID="8b766e41a157db7a703015b0504adf1f01b15a6ef061e2f64f148c69531ba279" Jan 30 17:33:26 crc kubenswrapper[4875]: I0130 17:33:26.421563 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:33:26 crc kubenswrapper[4875]: E0130 17:33:26.421921 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:33:33 crc kubenswrapper[4875]: I0130 17:33:33.261361 4875 scope.go:117] "RemoveContainer" containerID="18a7ab848c358b391a6491ffb397203e51c07cef2e2d9b7874e3ee22c65212e7" Jan 30 17:33:33 crc kubenswrapper[4875]: I0130 17:33:33.287745 4875 scope.go:117] "RemoveContainer" containerID="4c435c7ff80e5e5253664e8e72ca2c8f0719ce98d14380cfc6f3755cd26ea028" Jan 30 17:33:33 crc kubenswrapper[4875]: I0130 17:33:33.327515 4875 scope.go:117] "RemoveContainer" containerID="e63343d8e7d1b1b510ca26306da702264278c4cb9e3a6e9f3c45d989ecaca591" Jan 30 17:33:33 crc kubenswrapper[4875]: I0130 17:33:33.369042 4875 scope.go:117] "RemoveContainer" containerID="ffcebc834d43459befd2e672b1e1b9a2c97b6252c714163806ce8712c364c5fb" Jan 30 17:33:33 crc kubenswrapper[4875]: I0130 17:33:33.392019 4875 scope.go:117] "RemoveContainer" containerID="7131a5fc87461b1befe413ec73a906a9795e46a30c3a7c912be498a59cdb76e8" Jan 30 17:33:33 crc kubenswrapper[4875]: I0130 17:33:33.418060 4875 scope.go:117] "RemoveContainer" containerID="8ad7a84a9a0f8ecde34599f4cbadc73becf21c82ce295d936758120386a061bc" Jan 30 17:33:33 crc kubenswrapper[4875]: I0130 17:33:33.448039 4875 scope.go:117] "RemoveContainer" 
containerID="d0810e56920bc76be7cd83273db37e38c94236b7920caf67465d7efb61e2d763" Jan 30 17:33:33 crc kubenswrapper[4875]: I0130 17:33:33.468917 4875 scope.go:117] "RemoveContainer" containerID="9fff9d8d9f07906d6e7a84d89cc9440aed1329c3a5c5d350600e526f00a7436f" Jan 30 17:33:33 crc kubenswrapper[4875]: I0130 17:33:33.485786 4875 scope.go:117] "RemoveContainer" containerID="5135aaed955a43e8e67672677d3c0535de6394613b6edba99a19074341436113" Jan 30 17:33:38 crc kubenswrapper[4875]: I0130 17:33:38.136844 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:33:38 crc kubenswrapper[4875]: E0130 17:33:38.137846 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:33:43 crc kubenswrapper[4875]: I0130 17:33:43.766074 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9h7dz/must-gather-nnzxw"] Jan 30 17:33:43 crc kubenswrapper[4875]: E0130 17:33:43.766862 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3b9c8e-71ae-4699-9cc0-779287ff7fd5" containerName="extract-utilities" Jan 30 17:33:43 crc kubenswrapper[4875]: I0130 17:33:43.766874 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3b9c8e-71ae-4699-9cc0-779287ff7fd5" containerName="extract-utilities" Jan 30 17:33:43 crc kubenswrapper[4875]: E0130 17:33:43.766893 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3b9c8e-71ae-4699-9cc0-779287ff7fd5" containerName="registry-server" Jan 30 17:33:43 crc kubenswrapper[4875]: I0130 17:33:43.766899 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3b9c8e-71ae-4699-9cc0-779287ff7fd5" containerName="registry-server" Jan 30 17:33:43 crc kubenswrapper[4875]: E0130 17:33:43.766905 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e3b9c8e-71ae-4699-9cc0-779287ff7fd5" containerName="extract-content" Jan 30 17:33:43 crc kubenswrapper[4875]: I0130 17:33:43.766913 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e3b9c8e-71ae-4699-9cc0-779287ff7fd5" containerName="extract-content" Jan 30 17:33:43 crc kubenswrapper[4875]: I0130 17:33:43.767059 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e3b9c8e-71ae-4699-9cc0-779287ff7fd5" containerName="registry-server" Jan 30 17:33:43 crc kubenswrapper[4875]: I0130 17:33:43.768058 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9h7dz/must-gather-nnzxw" Jan 30 17:33:43 crc kubenswrapper[4875]: I0130 17:33:43.772410 4875 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9h7dz"/"default-dockercfg-pbdt4" Jan 30 17:33:43 crc kubenswrapper[4875]: I0130 17:33:43.772450 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9h7dz"/"kube-root-ca.crt" Jan 30 17:33:43 crc kubenswrapper[4875]: I0130 17:33:43.772483 4875 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9h7dz"/"openshift-service-ca.crt" Jan 30 17:33:43 crc kubenswrapper[4875]: I0130 17:33:43.779982 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9h7dz/must-gather-nnzxw"] Jan 30 17:33:43 crc kubenswrapper[4875]: I0130 17:33:43.962348 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt9c8\" (UniqueName: \"kubernetes.io/projected/ec719a69-b6fe-4e09-b38b-1329f5e1355c-kube-api-access-kt9c8\") pod \"must-gather-nnzxw\" (UID: \"ec719a69-b6fe-4e09-b38b-1329f5e1355c\") " pod="openshift-must-gather-9h7dz/must-gather-nnzxw" Jan 30 17:33:43 crc kubenswrapper[4875]: I0130 17:33:43.962488 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec719a69-b6fe-4e09-b38b-1329f5e1355c-must-gather-output\") pod \"must-gather-nnzxw\" (UID: \"ec719a69-b6fe-4e09-b38b-1329f5e1355c\") " pod="openshift-must-gather-9h7dz/must-gather-nnzxw" Jan 30 17:33:44 crc kubenswrapper[4875]: I0130 17:33:44.063576 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec719a69-b6fe-4e09-b38b-1329f5e1355c-must-gather-output\") pod \"must-gather-nnzxw\" (UID: \"ec719a69-b6fe-4e09-b38b-1329f5e1355c\") " pod="openshift-must-gather-9h7dz/must-gather-nnzxw" Jan 30 17:33:44 crc kubenswrapper[4875]: I0130 17:33:44.063816 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt9c8\" (UniqueName: \"kubernetes.io/projected/ec719a69-b6fe-4e09-b38b-1329f5e1355c-kube-api-access-kt9c8\") pod \"must-gather-nnzxw\" (UID: \"ec719a69-b6fe-4e09-b38b-1329f5e1355c\") " pod="openshift-must-gather-9h7dz/must-gather-nnzxw" Jan 30 17:33:44 crc kubenswrapper[4875]: I0130 17:33:44.064139 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec719a69-b6fe-4e09-b38b-1329f5e1355c-must-gather-output\") pod \"must-gather-nnzxw\" (UID: \"ec719a69-b6fe-4e09-b38b-1329f5e1355c\") " pod="openshift-must-gather-9h7dz/must-gather-nnzxw" Jan 30 17:33:44 crc kubenswrapper[4875]: I0130 17:33:44.082349 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt9c8\" (UniqueName: \"kubernetes.io/projected/ec719a69-b6fe-4e09-b38b-1329f5e1355c-kube-api-access-kt9c8\") pod \"must-gather-nnzxw\" (UID: \"ec719a69-b6fe-4e09-b38b-1329f5e1355c\") " pod="openshift-must-gather-9h7dz/must-gather-nnzxw" Jan 30 17:33:44 crc kubenswrapper[4875]: I0130 17:33:44.131381 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9h7dz/must-gather-nnzxw" Jan 30 17:33:44 crc kubenswrapper[4875]: I0130 17:33:44.559802 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9h7dz/must-gather-nnzxw"] Jan 30 17:33:44 crc kubenswrapper[4875]: I0130 17:33:44.567638 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9h7dz/must-gather-nnzxw" event={"ID":"ec719a69-b6fe-4e09-b38b-1329f5e1355c","Type":"ContainerStarted","Data":"545833d8d11f75d1b6f3d74b58e27eea33c2d1b97932d72af9909506ba89edc5"} Jan 30 17:33:49 crc kubenswrapper[4875]: I0130 17:33:49.136599 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:33:49 crc kubenswrapper[4875]: E0130 17:33:49.137417 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:33:49 crc kubenswrapper[4875]: I0130 17:33:49.619522 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9h7dz/must-gather-nnzxw" event={"ID":"ec719a69-b6fe-4e09-b38b-1329f5e1355c","Type":"ContainerStarted","Data":"90fb865ef25622cd33cac2dd29b03c939066868fd2c89759dcc8101ec705f947"} Jan 30 17:33:49 crc kubenswrapper[4875]: I0130 17:33:49.619568 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9h7dz/must-gather-nnzxw" event={"ID":"ec719a69-b6fe-4e09-b38b-1329f5e1355c","Type":"ContainerStarted","Data":"3a0d91f63633e5db5a91fa404943642090eae672871272f061a701596f6b3df2"} Jan 30 17:33:49 crc kubenswrapper[4875]: I0130 17:33:49.638124 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9h7dz/must-gather-nnzxw" podStartSLOduration=2.153831691 podStartE2EDuration="6.63810241s" podCreationTimestamp="2026-01-30 17:33:43 +0000 UTC" firstStartedPulling="2026-01-30 17:33:44.557541351 +0000 UTC m=+2235.104904734" lastFinishedPulling="2026-01-30 17:33:49.04181207 +0000 UTC m=+2239.589175453" observedRunningTime="2026-01-30 17:33:49.632873803 +0000 UTC m=+2240.180237176" watchObservedRunningTime="2026-01-30 17:33:49.63810241 +0000 UTC m=+2240.185465803" Jan 30 17:34:01 crc kubenswrapper[4875]: I0130 17:34:01.136259 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:34:01 crc kubenswrapper[4875]: E0130 17:34:01.137028 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:34:12 crc kubenswrapper[4875]: I0130 17:34:12.136176 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:34:12 crc kubenswrapper[4875]: E0130 17:34:12.136894 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:34:27 crc kubenswrapper[4875]: I0130 17:34:27.136229 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:34:27 crc kubenswrapper[4875]: E0130 17:34:27.137014 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:34:33 crc kubenswrapper[4875]: I0130 17:34:33.682366 4875 scope.go:117] "RemoveContainer" containerID="bcd10314f3ccef71c79e77546abed5f566274be35a94b903e61d1915107e2bdd" Jan 30 17:34:33 crc kubenswrapper[4875]: I0130 17:34:33.703934 4875 scope.go:117] "RemoveContainer" containerID="2be2a9e37c333e0f75cad0d6af4d18570a560f5bfe64aa3694964dfcb1112503" Jan 30 17:34:33 crc kubenswrapper[4875]: I0130 17:34:33.744275 4875 scope.go:117] "RemoveContainer" containerID="967fd9e64f6903e19dee956b5e2fe5943168c04ecb829e537394f3feee298eba" Jan 30 17:34:33 crc kubenswrapper[4875]: I0130 17:34:33.782097 4875 scope.go:117] "RemoveContainer" containerID="94f3af0360fd6badd605b830b7231cd9bce2de25e8225e009bfc0631503624fd" Jan 30 17:34:42 crc kubenswrapper[4875]: I0130 17:34:42.136436 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:34:42 crc kubenswrapper[4875]: E0130 17:34:42.137225 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:34:48 crc kubenswrapper[4875]: I0130 17:34:48.185704 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n_7390a607-60b7-4f18-af7a-b4391c97a01f/util/0.log" Jan 30 17:34:48 crc kubenswrapper[4875]: I0130 17:34:48.385003 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n_7390a607-60b7-4f18-af7a-b4391c97a01f/pull/0.log" Jan 30 17:34:48 crc kubenswrapper[4875]: I0130 17:34:48.388710 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n_7390a607-60b7-4f18-af7a-b4391c97a01f/util/0.log" Jan 30 17:34:48 crc kubenswrapper[4875]: I0130 17:34:48.395524 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n_7390a607-60b7-4f18-af7a-b4391c97a01f/pull/0.log" Jan 30 17:34:48 crc kubenswrapper[4875]: I0130 17:34:48.541423 4875 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n_7390a607-60b7-4f18-af7a-b4391c97a01f/util/0.log" Jan 30 17:34:48 crc kubenswrapper[4875]: I0130 17:34:48.554630 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n_7390a607-60b7-4f18-af7a-b4391c97a01f/extract/0.log" Jan 30 17:34:48 crc kubenswrapper[4875]: I0130 17:34:48.583010 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3db87763b33e532ce377c07e54d35eddae23e7d7e90586e1e899201350q6b8n_7390a607-60b7-4f18-af7a-b4391c97a01f/pull/0.log" Jan 30 17:34:48 crc kubenswrapper[4875]: I0130 17:34:48.700246 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-mjlwh_be56ef14-c793-4e0a-82bb-4e29b4182e22/manager/0.log" Jan 30 17:34:48 crc kubenswrapper[4875]: I0130 17:34:48.763907 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd_f5b461b0-718a-4065-bf1d-db2860d2af04/util/0.log" Jan 30 17:34:48 crc kubenswrapper[4875]: I0130 17:34:48.915111 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd_f5b461b0-718a-4065-bf1d-db2860d2af04/util/0.log" Jan 30 17:34:48 crc kubenswrapper[4875]: I0130 17:34:48.923996 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd_f5b461b0-718a-4065-bf1d-db2860d2af04/pull/0.log" Jan 30 17:34:48 crc kubenswrapper[4875]: I0130 17:34:48.932461 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd_f5b461b0-718a-4065-bf1d-db2860d2af04/pull/0.log" Jan 30 17:34:49 crc kubenswrapper[4875]: I0130 17:34:49.128598 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd_f5b461b0-718a-4065-bf1d-db2860d2af04/util/0.log" Jan 30 17:34:49 crc kubenswrapper[4875]: I0130 17:34:49.136415 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd_f5b461b0-718a-4065-bf1d-db2860d2af04/pull/0.log" Jan 30 17:34:49 crc kubenswrapper[4875]: I0130 17:34:49.147721 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c03751a44af5842905226f8f1dfb5683231cc8a01f7c669d66b307a0a1gd7fd_f5b461b0-718a-4065-bf1d-db2860d2af04/extract/0.log" Jan 30 17:34:49 crc kubenswrapper[4875]: I0130 17:34:49.295803 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-dm9v4_4d112d50-a873-440f-b366-332c135cd9cf/manager/0.log" Jan 30 17:34:49 crc kubenswrapper[4875]: I0130 17:34:49.356004 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-znpxc_daa61e94-524b-445a-8086-63a4a3db6764/manager/0.log" Jan 30 17:34:49 crc kubenswrapper[4875]: I0130 17:34:49.487746 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-gbhbx_89036e1f-6293-456d-ae24-6a52b2a102d9/manager/0.log" Jan 30 17:34:49 crc kubenswrapper[4875]: I0130 
17:34:49.551387 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-bvnzf_d6508139-1b0b-45c7-b307-901c0903370f/manager/0.log" Jan 30 17:34:49 crc kubenswrapper[4875]: I0130 17:34:49.665972 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-fpcz4_14395019-dadc-4326-8a88-3f8746438a60/manager/0.log" Jan 30 17:34:49 crc kubenswrapper[4875]: I0130 17:34:49.805367 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-frg6k_9a2f99f7-889a-4847-88f0-3241c2fa3353/manager/0.log" Jan 30 17:34:49 crc kubenswrapper[4875]: I0130 17:34:49.891419 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-fdmpd_792a5bfa-13bb-4e86-ab45-09dd184fcab3/manager/0.log" Jan 30 17:34:50 crc kubenswrapper[4875]: I0130 17:34:50.026545 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-cpvgb_1a65b1f7-9d89-4a8b-9af9-811495df5c5f/manager/0.log" Jan 30 17:34:50 crc kubenswrapper[4875]: I0130 17:34:50.096110 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-nzlnv_408af5cb-dfce-44ff-9b25-5378f194196f/manager/0.log" Jan 30 17:34:50 crc kubenswrapper[4875]: I0130 17:34:50.190619 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-d74js_a8c14e5e-0827-45c6-8e21-c524ad39fb11/manager/0.log" Jan 30 17:34:50 crc kubenswrapper[4875]: I0130 17:34:50.270385 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-w75bt_972271b3-306a-4015-be23-c1320e0c296e/manager/0.log" Jan 30 17:34:50 crc kubenswrapper[4875]: I0130 17:34:50.531079 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-pd8bb_87fba6ee-2538-48b8-8a3d-cdd9308305a6/registry-server/0.log" Jan 30 17:34:50 crc kubenswrapper[4875]: I0130 17:34:50.660935 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-64bd9bf7b6-llx69_bdc3f51f-4dc1-45bd-b26d-1cacf01f9097/manager/0.log" Jan 30 17:34:50 crc kubenswrapper[4875]: I0130 17:34:50.672057 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-h9cpk_044cc22a-35c3-49ac-8c70-80478ce3f670/manager/0.log" Jan 30 17:34:50 crc kubenswrapper[4875]: I0130 17:34:50.841575 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dm2tp2_59490e66-2646-4a95-9b81-e372fbd2f921/manager/0.log" Jan 30 17:34:51 crc kubenswrapper[4875]: I0130 17:34:51.069953 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wdth9_90d2ca44-318f-4c47-8a9e-2781ac1151e6/registry-server/0.log" Jan 30 17:34:51 crc kubenswrapper[4875]: I0130 17:34:51.185049 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6f764c8dd-9ntw2_662b188b-86ea-439e-a40b-6284d49e476e/manager/0.log" Jan 30 17:34:51 crc kubenswrapper[4875]: I0130 17:34:51.229297 4875 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-xnw72_cefac6c5-5765-4646-a5c1-9832fb0170d6/manager/0.log" Jan 30 17:34:51 crc kubenswrapper[4875]: I0130 17:34:51.390706 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-8dhn6_d3967345-0c3d-431b-8408-3f7beaba730d/manager/0.log" Jan 30 17:34:51 crc kubenswrapper[4875]: I0130 17:34:51.443655 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-nj2ss_86be17ce-228e-46ba-84df-5134bdb00c99/operator/0.log" Jan 30 17:34:51 crc kubenswrapper[4875]: I0130 17:34:51.559730 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-zxs9g_f4df6dfd-91eb-4d61-93fd-b93e111eb127/manager/0.log" Jan 30 17:34:51 crc kubenswrapper[4875]: I0130 17:34:51.655063 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-ns9pg_921d8e30-00c8-43e3-b44a-4de9e4450ba2/manager/0.log" Jan 30 17:34:51 crc kubenswrapper[4875]: I0130 17:34:51.773693 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-ld7cp_128260c8-c860-43f1-acd0-b5d9ed7d3f01/manager/0.log" Jan 30 17:34:52 crc kubenswrapper[4875]: I0130 17:34:52.047758 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-z4fxd_66127cf7-84e7-4bb6-9830-936f7e20586d/manager/0.log" Jan 30 17:34:56 crc kubenswrapper[4875]: I0130 17:34:56.136134 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:34:56 crc kubenswrapper[4875]: E0130 17:34:56.136656 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:35:09 crc kubenswrapper[4875]: I0130 17:35:09.330319 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-2njb9_0ce1959e-9d34-4221-8ede-5ec652b44b0d/control-plane-machine-set-operator/0.log" Jan 30 17:35:09 crc kubenswrapper[4875]: I0130 17:35:09.564193 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-j2q7s_56f1b088-2293-4064-b76b-40b9bc9ef3d5/kube-rbac-proxy/0.log" Jan 30 17:35:09 crc kubenswrapper[4875]: I0130 17:35:09.623245 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-j2q7s_56f1b088-2293-4064-b76b-40b9bc9ef3d5/machine-api-operator/0.log" Jan 30 17:35:10 crc kubenswrapper[4875]: I0130 17:35:10.143505 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:35:10 crc kubenswrapper[4875]: E0130 17:35:10.143919 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:35:21 crc kubenswrapper[4875]: I0130 17:35:21.058267 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-4nqc6_b355c16e-74db-4e9c-b779-6a921fff40fb/cert-manager-controller/0.log" Jan 30 17:35:21 crc kubenswrapper[4875]: I0130 17:35:21.262043 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-vtttm_20f472cd-b250-40c1-bef3-3e32a16443a4/cert-manager-cainjector/0.log" Jan 30 17:35:21 crc kubenswrapper[4875]: I0130 17:35:21.297868 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-swrxh_cff87141-71be-4df3-b630-9724d884f3ca/cert-manager-webhook/0.log" Jan 30 17:35:22 crc kubenswrapper[4875]: I0130 17:35:22.135715 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:35:22 crc kubenswrapper[4875]: E0130 17:35:22.136500 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:35:32 crc kubenswrapper[4875]: I0130 17:35:32.337124 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-cjpzb_27bed214-93d4-493b-a471-2f0913007e55/nmstate-console-plugin/0.log" Jan 30 17:35:32 crc kubenswrapper[4875]: I0130 17:35:32.497496 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-s6n6v_227eb898-0116-4963-9c36-991e1d69089b/nmstate-handler/0.log" Jan 30 17:35:32 crc kubenswrapper[4875]: I0130 17:35:32.545842 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k57t9_10d88af6-3015-4590-af17-92693e9d5c2d/kube-rbac-proxy/0.log" Jan 30 17:35:32 crc kubenswrapper[4875]: I0130 17:35:32.572573 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k57t9_10d88af6-3015-4590-af17-92693e9d5c2d/nmstate-metrics/0.log" Jan 30 17:35:32 crc kubenswrapper[4875]: I0130 17:35:32.754147 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-q6zbs_f004abd4-e3a2-4f6e-8c3c-85202b7a4b9f/nmstate-operator/0.log" Jan 30 17:35:32 crc kubenswrapper[4875]: I0130 17:35:32.794250 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-47mwk_07aa98a9-5198-4088-abe2-c57d80a64e3e/nmstate-webhook/0.log" Jan 30 17:35:37 crc kubenswrapper[4875]: I0130 17:35:37.135768 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:35:37 crc kubenswrapper[4875]: E0130 17:35:37.136685 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:35:52 crc kubenswrapper[4875]: I0130 17:35:52.136319 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:35:52 crc kubenswrapper[4875]: E0130 17:35:52.137111 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:35:59 crc kubenswrapper[4875]: I0130 17:35:59.725374 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-5sf9s_93d86069-0a11-45c8-8438-f10ddb9b0dc5/kube-rbac-proxy/0.log" Jan 30 17:35:59 crc kubenswrapper[4875]: I0130 17:35:59.815121 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-5sf9s_93d86069-0a11-45c8-8438-f10ddb9b0dc5/controller/0.log" Jan 30 17:35:59 crc kubenswrapper[4875]: I0130 17:35:59.936425 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/cp-frr-files/0.log" Jan 30 17:36:00 crc kubenswrapper[4875]: I0130 17:36:00.074790 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/cp-frr-files/0.log" Jan 30 17:36:00 crc kubenswrapper[4875]: I0130 17:36:00.102383 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/cp-metrics/0.log" Jan 30 17:36:00 crc kubenswrapper[4875]: I0130 17:36:00.120438 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/cp-reloader/0.log" Jan 30 17:36:00 crc kubenswrapper[4875]: I0130 17:36:00.140613 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/cp-reloader/0.log" Jan 30 17:36:00 crc kubenswrapper[4875]: I0130 17:36:00.296571 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/cp-metrics/0.log" Jan 30 17:36:00 crc kubenswrapper[4875]: I0130 17:36:00.318623 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/cp-reloader/0.log" Jan 30 17:36:00 crc kubenswrapper[4875]: I0130 17:36:00.323337 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/cp-frr-files/0.log" Jan 30 17:36:00 crc kubenswrapper[4875]: I0130 17:36:00.347610 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/cp-metrics/0.log" Jan 30 17:36:00 crc kubenswrapper[4875]: I0130 17:36:00.526891 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/cp-reloader/0.log" Jan 30 17:36:00 crc kubenswrapper[4875]: I0130 17:36:00.530372 4875 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/cp-frr-files/0.log" Jan 30 17:36:00 crc kubenswrapper[4875]: I0130 17:36:00.558619 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/cp-metrics/0.log" Jan 30 17:36:00 crc kubenswrapper[4875]: I0130 17:36:00.589150 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/controller/0.log" Jan 30 17:36:00 crc kubenswrapper[4875]: I0130 17:36:00.703779 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/frr-metrics/0.log" Jan 30 17:36:00 crc kubenswrapper[4875]: I0130 17:36:00.718564 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/kube-rbac-proxy/0.log" Jan 30 17:36:00 crc kubenswrapper[4875]: I0130 17:36:00.790000 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/kube-rbac-proxy-frr/0.log" Jan 30 17:36:00 crc kubenswrapper[4875]: I0130 17:36:00.943467 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/reloader/0.log" Jan 30 17:36:01 crc kubenswrapper[4875]: I0130 17:36:01.009654 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-l2qcq_b726108f-6096-4549-a56e-4aaef276d309/frr-k8s-webhook-server/0.log" Jan 30 17:36:01 crc kubenswrapper[4875]: I0130 17:36:01.203650 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6f788d9fdf-mb5fc_597e5eb9-1876-4309-b8e1-a870c946cfc0/manager/0.log" Jan 30 17:36:01 crc kubenswrapper[4875]: I0130 17:36:01.390937 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-d45878f5b-stwlx_bdc69284-9636-490f-97ca-8e32af6b9144/webhook-server/0.log" Jan 30 17:36:01 crc kubenswrapper[4875]: I0130 17:36:01.454521 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2t6jc_edcbb6f3-6630-4b11-a936-873403d63ecb/kube-rbac-proxy/0.log" Jan 30 17:36:01 crc kubenswrapper[4875]: I0130 17:36:01.848642 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qznj9_099cb5be-6270-4a46-b135-560981a13b91/frr/0.log" Jan 30 17:36:01 crc kubenswrapper[4875]: I0130 17:36:01.938501 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2t6jc_edcbb6f3-6630-4b11-a936-873403d63ecb/speaker/0.log" Jan 30 17:36:07 crc kubenswrapper[4875]: I0130 17:36:07.135893 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:36:07 crc kubenswrapper[4875]: E0130 17:36:07.136869 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:36:15 crc kubenswrapper[4875]: I0130 17:36:15.269476 4875 log.go:25] "Finished parsing 
log file" path="/var/log/pods/nova-kuttl-default_keystone-b6888cc46-89gfr_e95a5815-f333-496a-a3cc-e568c1ded6ba/keystone-api/0.log" Jan 30 17:36:15 crc kubenswrapper[4875]: I0130 17:36:15.546559 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_83732f39-75fd-4817-be96-f954dcc5fd96/mysql-bootstrap/0.log" Jan 30 17:36:15 crc kubenswrapper[4875]: I0130 17:36:15.706614 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_83732f39-75fd-4817-be96-f954dcc5fd96/mysql-bootstrap/0.log" Jan 30 17:36:15 crc kubenswrapper[4875]: I0130 17:36:15.856990 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_83732f39-75fd-4817-be96-f954dcc5fd96/galera/0.log" Jan 30 17:36:15 crc kubenswrapper[4875]: I0130 17:36:15.986065 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_2651f38f-c3ae-4970-ab34-7b9540d5aa24/mysql-bootstrap/0.log" Jan 30 17:36:16 crc kubenswrapper[4875]: I0130 17:36:16.153014 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_2651f38f-c3ae-4970-ab34-7b9540d5aa24/mysql-bootstrap/0.log" Jan 30 17:36:16 crc kubenswrapper[4875]: I0130 17:36:16.206404 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_2651f38f-c3ae-4970-ab34-7b9540d5aa24/galera/0.log" Jan 30 17:36:16 crc kubenswrapper[4875]: I0130 17:36:16.359980 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_c4f3c910-b4f4-40cf-bf87-aabb54bb76c3/openstackclient/0.log" Jan 30 17:36:16 crc kubenswrapper[4875]: I0130 17:36:16.490015 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-696447b7b-gwj9q_2e4060f6-e91b-4f67-b959-9e2a125c05d3/placement-api/0.log" Jan 30 17:36:16 crc kubenswrapper[4875]: I0130 17:36:16.619020 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-696447b7b-gwj9q_2e4060f6-e91b-4f67-b959-9e2a125c05d3/placement-log/0.log" Jan 30 17:36:16 crc kubenswrapper[4875]: I0130 17:36:16.710852 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_2d4b13af-d4ec-458c-b3a9-e060171110f6/setup-container/0.log" Jan 30 17:36:16 crc kubenswrapper[4875]: I0130 17:36:16.902346 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_2d4b13af-d4ec-458c-b3a9-e060171110f6/setup-container/0.log" Jan 30 17:36:16 crc kubenswrapper[4875]: I0130 17:36:16.943360 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_e387e78d-25ab-454b-9b66-d2cc13abe676/memcached/0.log" Jan 30 17:36:16 crc kubenswrapper[4875]: I0130 17:36:16.954395 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_2d4b13af-d4ec-458c-b3a9-e060171110f6/rabbitmq/0.log" Jan 30 17:36:17 crc kubenswrapper[4875]: I0130 17:36:17.076959 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_b6ee4eec-358c-45f7-9b1a-143de69b2929/setup-container/0.log" Jan 30 17:36:17 crc kubenswrapper[4875]: I0130 17:36:17.267382 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_e75a0606-ea82-4ab9-8245-feb3105a23ba/setup-container/0.log" Jan 30 17:36:17 crc 
kubenswrapper[4875]: I0130 17:36:17.267667 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_b6ee4eec-358c-45f7-9b1a-143de69b2929/setup-container/0.log" Jan 30 17:36:17 crc kubenswrapper[4875]: I0130 17:36:17.296204 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_b6ee4eec-358c-45f7-9b1a-143de69b2929/rabbitmq/0.log" Jan 30 17:36:17 crc kubenswrapper[4875]: I0130 17:36:17.423004 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_e75a0606-ea82-4ab9-8245-feb3105a23ba/setup-container/0.log" Jan 30 17:36:17 crc kubenswrapper[4875]: I0130 17:36:17.458903 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_e75a0606-ea82-4ab9-8245-feb3105a23ba/rabbitmq/0.log" Jan 30 17:36:18 crc kubenswrapper[4875]: I0130 17:36:18.136437 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:36:18 crc kubenswrapper[4875]: E0130 17:36:18.136742 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:36:29 crc kubenswrapper[4875]: I0130 17:36:29.136375 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:36:29 crc kubenswrapper[4875]: E0130 17:36:29.137053 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:36:29 crc kubenswrapper[4875]: I0130 17:36:29.886045 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs_b4d7437b-5c96-4130-93dc-119f95d08e50/util/0.log" Jan 30 17:36:30 crc kubenswrapper[4875]: I0130 17:36:30.023356 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs_b4d7437b-5c96-4130-93dc-119f95d08e50/util/0.log" Jan 30 17:36:30 crc kubenswrapper[4875]: I0130 17:36:30.095019 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs_b4d7437b-5c96-4130-93dc-119f95d08e50/pull/0.log" Jan 30 17:36:30 crc kubenswrapper[4875]: I0130 17:36:30.157426 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs_b4d7437b-5c96-4130-93dc-119f95d08e50/pull/0.log" Jan 30 17:36:30 crc kubenswrapper[4875]: I0130 17:36:30.278350 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs_b4d7437b-5c96-4130-93dc-119f95d08e50/util/0.log" Jan 30 17:36:30 crc 
kubenswrapper[4875]: I0130 17:36:30.279398 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs_b4d7437b-5c96-4130-93dc-119f95d08e50/pull/0.log" Jan 30 17:36:30 crc kubenswrapper[4875]: I0130 17:36:30.306820 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dctxtjs_b4d7437b-5c96-4130-93dc-119f95d08e50/extract/0.log" Jan 30 17:36:30 crc kubenswrapper[4875]: I0130 17:36:30.445505 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb_0323f50d-c1fd-466c-ab03-020895b83c84/util/0.log" Jan 30 17:36:30 crc kubenswrapper[4875]: I0130 17:36:30.621743 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb_0323f50d-c1fd-466c-ab03-020895b83c84/util/0.log" Jan 30 17:36:30 crc kubenswrapper[4875]: I0130 17:36:30.646145 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb_0323f50d-c1fd-466c-ab03-020895b83c84/pull/0.log" Jan 30 17:36:30 crc kubenswrapper[4875]: I0130 17:36:30.667197 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb_0323f50d-c1fd-466c-ab03-020895b83c84/pull/0.log" Jan 30 17:36:30 crc kubenswrapper[4875]: I0130 17:36:30.825801 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb_0323f50d-c1fd-466c-ab03-020895b83c84/util/0.log" Jan 30 17:36:30 crc kubenswrapper[4875]: I0130 17:36:30.851261 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb_0323f50d-c1fd-466c-ab03-020895b83c84/pull/0.log" Jan 30 17:36:30 crc kubenswrapper[4875]: I0130 17:36:30.910049 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139vhsb_0323f50d-c1fd-466c-ab03-020895b83c84/extract/0.log" Jan 30 17:36:31 crc kubenswrapper[4875]: I0130 17:36:31.018823 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr_f6f44679-6e5c-49d2-b215-7af315008c79/util/0.log" Jan 30 17:36:31 crc kubenswrapper[4875]: I0130 17:36:31.204811 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr_f6f44679-6e5c-49d2-b215-7af315008c79/util/0.log" Jan 30 17:36:31 crc kubenswrapper[4875]: I0130 17:36:31.232886 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr_f6f44679-6e5c-49d2-b215-7af315008c79/pull/0.log" Jan 30 17:36:31 crc kubenswrapper[4875]: I0130 17:36:31.243497 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr_f6f44679-6e5c-49d2-b215-7af315008c79/pull/0.log" Jan 30 17:36:31 crc kubenswrapper[4875]: I0130 17:36:31.395868 4875 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr_f6f44679-6e5c-49d2-b215-7af315008c79/pull/0.log" Jan 30 17:36:31 crc kubenswrapper[4875]: I0130 17:36:31.397772 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr_f6f44679-6e5c-49d2-b215-7af315008c79/util/0.log" Jan 30 17:36:31 crc kubenswrapper[4875]: I0130 17:36:31.409070 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5vl4wr_f6f44679-6e5c-49d2-b215-7af315008c79/extract/0.log" Jan 30 17:36:31 crc kubenswrapper[4875]: I0130 17:36:31.552342 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jgw9d_2247927f-781b-4017-87f0-90143313e690/extract-utilities/0.log" Jan 30 17:36:31 crc kubenswrapper[4875]: I0130 17:36:31.700071 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jgw9d_2247927f-781b-4017-87f0-90143313e690/extract-utilities/0.log" Jan 30 17:36:31 crc kubenswrapper[4875]: I0130 17:36:31.728808 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jgw9d_2247927f-781b-4017-87f0-90143313e690/extract-content/0.log" Jan 30 17:36:31 crc kubenswrapper[4875]: I0130 17:36:31.757645 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jgw9d_2247927f-781b-4017-87f0-90143313e690/extract-content/0.log" Jan 30 17:36:31 crc kubenswrapper[4875]: I0130 17:36:31.915527 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jgw9d_2247927f-781b-4017-87f0-90143313e690/extract-content/0.log" Jan 30 17:36:31 crc kubenswrapper[4875]: I0130 17:36:31.934483 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jgw9d_2247927f-781b-4017-87f0-90143313e690/extract-utilities/0.log" Jan 30 17:36:32 crc kubenswrapper[4875]: I0130 17:36:32.137833 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bfpqk_dc32276d-2194-4ac4-9a86-da06d803d46d/extract-utilities/0.log" Jan 30 17:36:32 crc kubenswrapper[4875]: I0130 17:36:32.146326 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jgw9d_2247927f-781b-4017-87f0-90143313e690/registry-server/0.log" Jan 30 17:36:32 crc kubenswrapper[4875]: I0130 17:36:32.320031 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bfpqk_dc32276d-2194-4ac4-9a86-da06d803d46d/extract-content/0.log" Jan 30 17:36:32 crc kubenswrapper[4875]: I0130 17:36:32.349112 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bfpqk_dc32276d-2194-4ac4-9a86-da06d803d46d/extract-utilities/0.log" Jan 30 17:36:32 crc kubenswrapper[4875]: I0130 17:36:32.366560 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bfpqk_dc32276d-2194-4ac4-9a86-da06d803d46d/extract-content/0.log" Jan 30 17:36:32 crc kubenswrapper[4875]: I0130 17:36:32.512541 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bfpqk_dc32276d-2194-4ac4-9a86-da06d803d46d/extract-utilities/0.log" Jan 30 17:36:32 crc kubenswrapper[4875]: 
I0130 17:36:32.518888 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bfpqk_dc32276d-2194-4ac4-9a86-da06d803d46d/extract-content/0.log" Jan 30 17:36:32 crc kubenswrapper[4875]: I0130 17:36:32.724653 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-j9hxl_ee16d58a-dd09-48a5-aa90-2788f5bd8fa2/marketplace-operator/0.log" Jan 30 17:36:32 crc kubenswrapper[4875]: I0130 17:36:32.946388 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-496j4_99ac87cd-0125-4818-9369-713bcd27baa1/extract-utilities/0.log" Jan 30 17:36:33 crc kubenswrapper[4875]: I0130 17:36:33.080527 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bfpqk_dc32276d-2194-4ac4-9a86-da06d803d46d/registry-server/0.log" Jan 30 17:36:33 crc kubenswrapper[4875]: I0130 17:36:33.136745 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-496j4_99ac87cd-0125-4818-9369-713bcd27baa1/extract-content/0.log" Jan 30 17:36:33 crc kubenswrapper[4875]: I0130 17:36:33.179161 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-496j4_99ac87cd-0125-4818-9369-713bcd27baa1/extract-utilities/0.log" Jan 30 17:36:33 crc kubenswrapper[4875]: I0130 17:36:33.203801 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-496j4_99ac87cd-0125-4818-9369-713bcd27baa1/extract-content/0.log" Jan 30 17:36:33 crc kubenswrapper[4875]: I0130 17:36:33.341564 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-496j4_99ac87cd-0125-4818-9369-713bcd27baa1/extract-content/0.log" Jan 30 17:36:33 crc kubenswrapper[4875]: I0130 17:36:33.341644 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-496j4_99ac87cd-0125-4818-9369-713bcd27baa1/extract-utilities/0.log" Jan 30 17:36:33 crc kubenswrapper[4875]: I0130 17:36:33.456399 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-496j4_99ac87cd-0125-4818-9369-713bcd27baa1/registry-server/0.log" Jan 30 17:36:33 crc kubenswrapper[4875]: I0130 17:36:33.555215 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gct2f_6596cd04-1bed-410b-8304-70d475ba79ee/extract-utilities/0.log" Jan 30 17:36:33 crc kubenswrapper[4875]: I0130 17:36:33.698124 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gct2f_6596cd04-1bed-410b-8304-70d475ba79ee/extract-utilities/0.log" Jan 30 17:36:33 crc kubenswrapper[4875]: I0130 17:36:33.708567 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gct2f_6596cd04-1bed-410b-8304-70d475ba79ee/extract-content/0.log" Jan 30 17:36:33 crc kubenswrapper[4875]: I0130 17:36:33.720291 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gct2f_6596cd04-1bed-410b-8304-70d475ba79ee/extract-content/0.log" Jan 30 17:36:33 crc kubenswrapper[4875]: I0130 17:36:33.871820 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gct2f_6596cd04-1bed-410b-8304-70d475ba79ee/extract-content/0.log" Jan 30 17:36:33 crc kubenswrapper[4875]: I0130 17:36:33.894982 
4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gct2f_6596cd04-1bed-410b-8304-70d475ba79ee/extract-utilities/0.log" Jan 30 17:36:33 crc kubenswrapper[4875]: I0130 17:36:33.908725 4875 scope.go:117] "RemoveContainer" containerID="767d982e7af4d83c650086b701a2fa9f9a5089fc861ca8cd8afe522f243d9970" Jan 30 17:36:33 crc kubenswrapper[4875]: I0130 17:36:33.967496 4875 scope.go:117] "RemoveContainer" containerID="4a9b06c1920eb9f6b0afeaff492600caf230684eee65e16565995580e7c402b6" Jan 30 17:36:34 crc kubenswrapper[4875]: I0130 17:36:34.001093 4875 scope.go:117] "RemoveContainer" containerID="0ed3ba63f5836d4284e5d0b19fd95871dcfd9e7c9402f3725a2f46fcf70bad3f" Jan 30 17:36:34 crc kubenswrapper[4875]: I0130 17:36:34.027654 4875 scope.go:117] "RemoveContainer" containerID="997a9a921c3442ae23e68a567b4c8b7589fd7a91c44f310e97c0cdfa685665ca" Jan 30 17:36:34 crc kubenswrapper[4875]: I0130 17:36:34.064816 4875 scope.go:117] "RemoveContainer" containerID="884f5e8bf932c01b78b8c37f8c809b1b3ef4d29853d1d14255a043960ed8ea2f" Jan 30 17:36:34 crc kubenswrapper[4875]: I0130 17:36:34.092797 4875 scope.go:117] "RemoveContainer" containerID="07b8cba6e8c49c3765f2197ce04d8326c1bb62115679b813ba0ebb0bca908f86" Jan 30 17:36:34 crc kubenswrapper[4875]: I0130 17:36:34.113322 4875 scope.go:117] "RemoveContainer" containerID="8a618579c1bc5c181ddc634f841afbceb7da691f7052ed3508f09c51a7ac8c14" Jan 30 17:36:34 crc kubenswrapper[4875]: I0130 17:36:34.153295 4875 scope.go:117] "RemoveContainer" containerID="b2f06f7d9a5c74971f735905abe0a8db492f48583eafd4afba815679681db8eb" Jan 30 17:36:34 crc kubenswrapper[4875]: I0130 17:36:34.177829 4875 scope.go:117] "RemoveContainer" containerID="390a3149f136c0c2a10de2c4276fe05eb29f07278aa8bce6e169e7d2e9928733" Jan 30 17:36:34 crc kubenswrapper[4875]: I0130 17:36:34.195671 4875 scope.go:117] "RemoveContainer" containerID="1c83ae29f08450fe361b967cc3c6634c1275f8b1383d44fc8ebad6147a18b38f" Jan 30 17:36:34 crc kubenswrapper[4875]: I0130 17:36:34.212850 4875 scope.go:117] "RemoveContainer" containerID="05b9c97ca737bffb2545d9a93b1e016613c7d56eda5749303cecae85e50b42aa" Jan 30 17:36:34 crc kubenswrapper[4875]: I0130 17:36:34.250867 4875 scope.go:117] "RemoveContainer" containerID="170ea35065a3a7a0019d371269ebeffdbd2f8bc3debdb53c930db1f18979556f" Jan 30 17:36:34 crc kubenswrapper[4875]: I0130 17:36:34.289749 4875 scope.go:117] "RemoveContainer" containerID="d1c70e9e66a5afcf12245057714ca2dd0767c123ca766889d49f554a0578dbd1" Jan 30 17:36:34 crc kubenswrapper[4875]: I0130 17:36:34.290803 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gct2f_6596cd04-1bed-410b-8304-70d475ba79ee/registry-server/0.log" Jan 30 17:36:34 crc kubenswrapper[4875]: I0130 17:36:34.337466 4875 scope.go:117] "RemoveContainer" containerID="348c9809cea5b0835d3a6a39e0b9a76a7319205cc07f3174ae2f8d1fb2dbe029" Jan 30 17:36:34 crc kubenswrapper[4875]: I0130 17:36:34.384412 4875 scope.go:117] "RemoveContainer" containerID="c084402e24b3ca5c167a0a8e077d2b1f367e48ebe16a30acf8dfd1ea7597d479" Jan 30 17:36:34 crc kubenswrapper[4875]: I0130 17:36:34.399118 4875 scope.go:117] "RemoveContainer" containerID="b6d156423146bb231253ca2e721349b2e472892e0e5224a367739c8da335d009" Jan 30 17:36:40 crc kubenswrapper[4875]: I0130 17:36:40.140603 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:36:40 crc kubenswrapper[4875]: E0130 17:36:40.141194 4875 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:36:48 crc kubenswrapper[4875]: E0130 17:36:48.210245 4875 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.65:45894->38.129.56.65:43921: write tcp 38.129.56.65:45894->38.129.56.65:43921: write: broken pipe Jan 30 17:36:53 crc kubenswrapper[4875]: I0130 17:36:53.136539 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:36:53 crc kubenswrapper[4875]: E0130 17:36:53.137575 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:37:04 crc kubenswrapper[4875]: I0130 17:37:04.136100 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:37:04 crc kubenswrapper[4875]: E0130 17:37:04.137096 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:37:19 crc kubenswrapper[4875]: I0130 17:37:19.136535 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:37:19 crc kubenswrapper[4875]: E0130 17:37:19.137482 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:37:33 crc kubenswrapper[4875]: I0130 17:37:33.136038 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:37:33 crc kubenswrapper[4875]: E0130 17:37:33.137136 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:37:34 crc kubenswrapper[4875]: I0130 17:37:34.591068 4875 scope.go:117] "RemoveContainer" containerID="14c44674ed3f3726b851288b88991b9bbb5d77f52fb0bcc14b14b104a80d17f8" Jan 30 17:37:34 crc kubenswrapper[4875]: I0130 
17:37:34.625186 4875 scope.go:117] "RemoveContainer" containerID="255aef2a1011cac29ec4a3195419ccb6464779ea8efb5b71a779497949cb44d4" Jan 30 17:37:34 crc kubenswrapper[4875]: I0130 17:37:34.668916 4875 scope.go:117] "RemoveContainer" containerID="bbfa23785eb18fe9d0fd851a0d2655426dfe59eae7a4164d11eb6912e983cb47" Jan 30 17:37:44 crc kubenswrapper[4875]: I0130 17:37:44.136394 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:37:44 crc kubenswrapper[4875]: E0130 17:37:44.138122 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:37:54 crc kubenswrapper[4875]: I0130 17:37:54.650855 4875 generic.go:334] "Generic (PLEG): container finished" podID="ec719a69-b6fe-4e09-b38b-1329f5e1355c" containerID="3a0d91f63633e5db5a91fa404943642090eae672871272f061a701596f6b3df2" exitCode=0 Jan 30 17:37:54 crc kubenswrapper[4875]: I0130 17:37:54.650950 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9h7dz/must-gather-nnzxw" event={"ID":"ec719a69-b6fe-4e09-b38b-1329f5e1355c","Type":"ContainerDied","Data":"3a0d91f63633e5db5a91fa404943642090eae672871272f061a701596f6b3df2"} Jan 30 17:37:54 crc kubenswrapper[4875]: I0130 17:37:54.651863 4875 scope.go:117] "RemoveContainer" containerID="3a0d91f63633e5db5a91fa404943642090eae672871272f061a701596f6b3df2" Jan 30 17:37:54 crc kubenswrapper[4875]: I0130 17:37:54.757964 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9h7dz_must-gather-nnzxw_ec719a69-b6fe-4e09-b38b-1329f5e1355c/gather/0.log" Jan 30 17:37:58 crc kubenswrapper[4875]: I0130 17:37:58.136243 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:37:58 crc kubenswrapper[4875]: E0130 17:37:58.136832 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:38:02 crc kubenswrapper[4875]: I0130 17:38:02.379265 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9h7dz/must-gather-nnzxw"] Jan 30 17:38:02 crc kubenswrapper[4875]: I0130 17:38:02.379925 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-9h7dz/must-gather-nnzxw" podUID="ec719a69-b6fe-4e09-b38b-1329f5e1355c" containerName="copy" containerID="cri-o://90fb865ef25622cd33cac2dd29b03c939066868fd2c89759dcc8101ec705f947" gracePeriod=2 Jan 30 17:38:02 crc kubenswrapper[4875]: I0130 17:38:02.386512 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9h7dz/must-gather-nnzxw"] Jan 30 17:38:02 crc kubenswrapper[4875]: E0130 17:38:02.554081 4875 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec719a69_b6fe_4e09_b38b_1329f5e1355c.slice/crio-conmon-90fb865ef25622cd33cac2dd29b03c939066868fd2c89759dcc8101ec705f947.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec719a69_b6fe_4e09_b38b_1329f5e1355c.slice/crio-90fb865ef25622cd33cac2dd29b03c939066868fd2c89759dcc8101ec705f947.scope\": RecentStats: unable to find data in memory cache]" Jan 30 17:38:02 crc kubenswrapper[4875]: I0130 17:38:02.713291 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9h7dz_must-gather-nnzxw_ec719a69-b6fe-4e09-b38b-1329f5e1355c/copy/0.log" Jan 30 17:38:02 crc kubenswrapper[4875]: I0130 17:38:02.713916 4875 generic.go:334] "Generic (PLEG): container finished" podID="ec719a69-b6fe-4e09-b38b-1329f5e1355c" containerID="90fb865ef25622cd33cac2dd29b03c939066868fd2c89759dcc8101ec705f947" exitCode=143 Jan 30 17:38:02 crc kubenswrapper[4875]: I0130 17:38:02.757792 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9h7dz_must-gather-nnzxw_ec719a69-b6fe-4e09-b38b-1329f5e1355c/copy/0.log" Jan 30 17:38:02 crc kubenswrapper[4875]: I0130 17:38:02.758410 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9h7dz/must-gather-nnzxw" Jan 30 17:38:02 crc kubenswrapper[4875]: I0130 17:38:02.932905 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec719a69-b6fe-4e09-b38b-1329f5e1355c-must-gather-output\") pod \"ec719a69-b6fe-4e09-b38b-1329f5e1355c\" (UID: \"ec719a69-b6fe-4e09-b38b-1329f5e1355c\") " Jan 30 17:38:02 crc kubenswrapper[4875]: I0130 17:38:02.933179 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt9c8\" (UniqueName: \"kubernetes.io/projected/ec719a69-b6fe-4e09-b38b-1329f5e1355c-kube-api-access-kt9c8\") pod \"ec719a69-b6fe-4e09-b38b-1329f5e1355c\" (UID: \"ec719a69-b6fe-4e09-b38b-1329f5e1355c\") " Jan 30 17:38:02 crc kubenswrapper[4875]: I0130 17:38:02.950797 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec719a69-b6fe-4e09-b38b-1329f5e1355c-kube-api-access-kt9c8" (OuterVolumeSpecName: "kube-api-access-kt9c8") pod "ec719a69-b6fe-4e09-b38b-1329f5e1355c" (UID: "ec719a69-b6fe-4e09-b38b-1329f5e1355c"). InnerVolumeSpecName "kube-api-access-kt9c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:38:03 crc kubenswrapper[4875]: I0130 17:38:03.038307 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt9c8\" (UniqueName: \"kubernetes.io/projected/ec719a69-b6fe-4e09-b38b-1329f5e1355c-kube-api-access-kt9c8\") on node \"crc\" DevicePath \"\"" Jan 30 17:38:03 crc kubenswrapper[4875]: I0130 17:38:03.101096 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec719a69-b6fe-4e09-b38b-1329f5e1355c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ec719a69-b6fe-4e09-b38b-1329f5e1355c" (UID: "ec719a69-b6fe-4e09-b38b-1329f5e1355c"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:38:03 crc kubenswrapper[4875]: I0130 17:38:03.140398 4875 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec719a69-b6fe-4e09-b38b-1329f5e1355c-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 30 17:38:03 crc kubenswrapper[4875]: I0130 17:38:03.721779 4875 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9h7dz_must-gather-nnzxw_ec719a69-b6fe-4e09-b38b-1329f5e1355c/copy/0.log" Jan 30 17:38:03 crc kubenswrapper[4875]: I0130 17:38:03.722433 4875 scope.go:117] "RemoveContainer" containerID="90fb865ef25622cd33cac2dd29b03c939066868fd2c89759dcc8101ec705f947" Jan 30 17:38:03 crc kubenswrapper[4875]: I0130 17:38:03.722549 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9h7dz/must-gather-nnzxw" Jan 30 17:38:03 crc kubenswrapper[4875]: I0130 17:38:03.748969 4875 scope.go:117] "RemoveContainer" containerID="3a0d91f63633e5db5a91fa404943642090eae672871272f061a701596f6b3df2" Jan 30 17:38:04 crc kubenswrapper[4875]: I0130 17:38:04.146018 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec719a69-b6fe-4e09-b38b-1329f5e1355c" path="/var/lib/kubelet/pods/ec719a69-b6fe-4e09-b38b-1329f5e1355c/volumes" Jan 30 17:38:12 crc kubenswrapper[4875]: I0130 17:38:12.136773 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:38:12 crc kubenswrapper[4875]: E0130 17:38:12.137734 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:38:24 crc kubenswrapper[4875]: I0130 17:38:24.135814 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:38:24 crc kubenswrapper[4875]: E0130 17:38:24.136572 4875 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wgsn_openshift-machine-config-operator(9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" Jan 30 17:38:39 crc kubenswrapper[4875]: I0130 17:38:39.136611 4875 scope.go:117] "RemoveContainer" containerID="704ae6a6adfdef396318b95fa2549a2e3f2436e391e8f6615dbd2d97bf207d52" Jan 30 17:38:39 crc kubenswrapper[4875]: I0130 17:38:39.975987 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" event={"ID":"9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8","Type":"ContainerStarted","Data":"0cb2d38d62a9eb38eabc1c8c299716547fd18c66ed70522295f80a9e0214118a"} Jan 30 17:40:34 crc kubenswrapper[4875]: I0130 17:40:34.597043 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-556s6"] Jan 30 17:40:34 crc kubenswrapper[4875]: E0130 17:40:34.598103 4875 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ec719a69-b6fe-4e09-b38b-1329f5e1355c" containerName="gather" Jan 30 17:40:34 crc kubenswrapper[4875]: I0130 17:40:34.598132 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec719a69-b6fe-4e09-b38b-1329f5e1355c" containerName="gather" Jan 30 17:40:34 crc kubenswrapper[4875]: E0130 17:40:34.598148 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec719a69-b6fe-4e09-b38b-1329f5e1355c" containerName="copy" Jan 30 17:40:34 crc kubenswrapper[4875]: I0130 17:40:34.598154 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec719a69-b6fe-4e09-b38b-1329f5e1355c" containerName="copy" Jan 30 17:40:34 crc kubenswrapper[4875]: I0130 17:40:34.598303 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec719a69-b6fe-4e09-b38b-1329f5e1355c" containerName="copy" Jan 30 17:40:34 crc kubenswrapper[4875]: I0130 17:40:34.598313 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec719a69-b6fe-4e09-b38b-1329f5e1355c" containerName="gather" Jan 30 17:40:34 crc kubenswrapper[4875]: I0130 17:40:34.599426 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-556s6" Jan 30 17:40:34 crc kubenswrapper[4875]: I0130 17:40:34.613298 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-556s6"] Jan 30 17:40:34 crc kubenswrapper[4875]: I0130 17:40:34.646009 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2946bc6d-d7c1-4550-952b-6df7af9c86f7-utilities\") pod \"community-operators-556s6\" (UID: \"2946bc6d-d7c1-4550-952b-6df7af9c86f7\") " pod="openshift-marketplace/community-operators-556s6" Jan 30 17:40:34 crc kubenswrapper[4875]: I0130 17:40:34.646070 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2946bc6d-d7c1-4550-952b-6df7af9c86f7-catalog-content\") pod \"community-operators-556s6\" (UID: \"2946bc6d-d7c1-4550-952b-6df7af9c86f7\") " pod="openshift-marketplace/community-operators-556s6" Jan 30 17:40:34 crc kubenswrapper[4875]: I0130 17:40:34.646143 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdzbq\" (UniqueName: \"kubernetes.io/projected/2946bc6d-d7c1-4550-952b-6df7af9c86f7-kube-api-access-qdzbq\") pod \"community-operators-556s6\" (UID: \"2946bc6d-d7c1-4550-952b-6df7af9c86f7\") " pod="openshift-marketplace/community-operators-556s6" Jan 30 17:40:34 crc kubenswrapper[4875]: I0130 17:40:34.747979 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdzbq\" (UniqueName: \"kubernetes.io/projected/2946bc6d-d7c1-4550-952b-6df7af9c86f7-kube-api-access-qdzbq\") pod \"community-operators-556s6\" (UID: \"2946bc6d-d7c1-4550-952b-6df7af9c86f7\") " pod="openshift-marketplace/community-operators-556s6" Jan 30 17:40:34 crc kubenswrapper[4875]: I0130 17:40:34.748283 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2946bc6d-d7c1-4550-952b-6df7af9c86f7-utilities\") pod \"community-operators-556s6\" (UID: \"2946bc6d-d7c1-4550-952b-6df7af9c86f7\") " pod="openshift-marketplace/community-operators-556s6" Jan 30 17:40:34 crc kubenswrapper[4875]: I0130 17:40:34.748304 4875 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2946bc6d-d7c1-4550-952b-6df7af9c86f7-catalog-content\") pod \"community-operators-556s6\" (UID: \"2946bc6d-d7c1-4550-952b-6df7af9c86f7\") " pod="openshift-marketplace/community-operators-556s6" Jan 30 17:40:34 crc kubenswrapper[4875]: I0130 17:40:34.748764 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2946bc6d-d7c1-4550-952b-6df7af9c86f7-utilities\") pod \"community-operators-556s6\" (UID: \"2946bc6d-d7c1-4550-952b-6df7af9c86f7\") " pod="openshift-marketplace/community-operators-556s6" Jan 30 17:40:34 crc kubenswrapper[4875]: I0130 17:40:34.748789 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2946bc6d-d7c1-4550-952b-6df7af9c86f7-catalog-content\") pod \"community-operators-556s6\" (UID: \"2946bc6d-d7c1-4550-952b-6df7af9c86f7\") " pod="openshift-marketplace/community-operators-556s6" Jan 30 17:40:34 crc kubenswrapper[4875]: I0130 17:40:34.769060 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdzbq\" (UniqueName: \"kubernetes.io/projected/2946bc6d-d7c1-4550-952b-6df7af9c86f7-kube-api-access-qdzbq\") pod \"community-operators-556s6\" (UID: \"2946bc6d-d7c1-4550-952b-6df7af9c86f7\") " pod="openshift-marketplace/community-operators-556s6" Jan 30 17:40:34 crc kubenswrapper[4875]: I0130 17:40:34.921809 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-556s6" Jan 30 17:40:35 crc kubenswrapper[4875]: I0130 17:40:35.485565 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-556s6"] Jan 30 17:40:35 crc kubenswrapper[4875]: I0130 17:40:35.756964 4875 generic.go:334] "Generic (PLEG): container finished" podID="2946bc6d-d7c1-4550-952b-6df7af9c86f7" containerID="59adfe2004808945e0fb96169286d801f72b1b6adbd0d53966fdac8adfa9b1da" exitCode=0 Jan 30 17:40:35 crc kubenswrapper[4875]: I0130 17:40:35.757046 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-556s6" event={"ID":"2946bc6d-d7c1-4550-952b-6df7af9c86f7","Type":"ContainerDied","Data":"59adfe2004808945e0fb96169286d801f72b1b6adbd0d53966fdac8adfa9b1da"} Jan 30 17:40:35 crc kubenswrapper[4875]: I0130 17:40:35.757100 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-556s6" event={"ID":"2946bc6d-d7c1-4550-952b-6df7af9c86f7","Type":"ContainerStarted","Data":"f1be590081792340ada6e199f1ddb76acf940aa8a7fe59b9d69fc480b4636c68"} Jan 30 17:40:35 crc kubenswrapper[4875]: I0130 17:40:35.759223 4875 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:40:37 crc kubenswrapper[4875]: I0130 17:40:37.776045 4875 generic.go:334] "Generic (PLEG): container finished" podID="2946bc6d-d7c1-4550-952b-6df7af9c86f7" containerID="2c1e610ddfa9d4d49c3af7a85ec894f9ebd13d62c46cb57751972df3a027757e" exitCode=0 Jan 30 17:40:37 crc kubenswrapper[4875]: I0130 17:40:37.776517 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-556s6" event={"ID":"2946bc6d-d7c1-4550-952b-6df7af9c86f7","Type":"ContainerDied","Data":"2c1e610ddfa9d4d49c3af7a85ec894f9ebd13d62c46cb57751972df3a027757e"} Jan 30 17:40:38 crc kubenswrapper[4875]: I0130 17:40:38.582576 4875 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-mf9fm"] Jan 30 17:40:38 crc kubenswrapper[4875]: I0130 17:40:38.585474 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mf9fm" Jan 30 17:40:38 crc kubenswrapper[4875]: I0130 17:40:38.590885 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mf9fm"] Jan 30 17:40:38 crc kubenswrapper[4875]: I0130 17:40:38.613342 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee80cf17-36cd-440c-be51-68e6db3720f6-utilities\") pod \"redhat-marketplace-mf9fm\" (UID: \"ee80cf17-36cd-440c-be51-68e6db3720f6\") " pod="openshift-marketplace/redhat-marketplace-mf9fm" Jan 30 17:40:38 crc kubenswrapper[4875]: I0130 17:40:38.613552 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee80cf17-36cd-440c-be51-68e6db3720f6-catalog-content\") pod \"redhat-marketplace-mf9fm\" (UID: \"ee80cf17-36cd-440c-be51-68e6db3720f6\") " pod="openshift-marketplace/redhat-marketplace-mf9fm" Jan 30 17:40:38 crc kubenswrapper[4875]: I0130 17:40:38.613704 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm58s\" (UniqueName: \"kubernetes.io/projected/ee80cf17-36cd-440c-be51-68e6db3720f6-kube-api-access-gm58s\") pod \"redhat-marketplace-mf9fm\" (UID: \"ee80cf17-36cd-440c-be51-68e6db3720f6\") " pod="openshift-marketplace/redhat-marketplace-mf9fm" Jan 30 17:40:38 crc kubenswrapper[4875]: I0130 17:40:38.714963 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee80cf17-36cd-440c-be51-68e6db3720f6-utilities\") pod \"redhat-marketplace-mf9fm\" (UID: \"ee80cf17-36cd-440c-be51-68e6db3720f6\") " pod="openshift-marketplace/redhat-marketplace-mf9fm" Jan 30 17:40:38 crc kubenswrapper[4875]: I0130 17:40:38.715093 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee80cf17-36cd-440c-be51-68e6db3720f6-catalog-content\") pod \"redhat-marketplace-mf9fm\" (UID: \"ee80cf17-36cd-440c-be51-68e6db3720f6\") " pod="openshift-marketplace/redhat-marketplace-mf9fm" Jan 30 17:40:38 crc kubenswrapper[4875]: I0130 17:40:38.715155 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm58s\" (UniqueName: \"kubernetes.io/projected/ee80cf17-36cd-440c-be51-68e6db3720f6-kube-api-access-gm58s\") pod \"redhat-marketplace-mf9fm\" (UID: \"ee80cf17-36cd-440c-be51-68e6db3720f6\") " pod="openshift-marketplace/redhat-marketplace-mf9fm" Jan 30 17:40:38 crc kubenswrapper[4875]: I0130 17:40:38.715693 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee80cf17-36cd-440c-be51-68e6db3720f6-utilities\") pod \"redhat-marketplace-mf9fm\" (UID: \"ee80cf17-36cd-440c-be51-68e6db3720f6\") " pod="openshift-marketplace/redhat-marketplace-mf9fm" Jan 30 17:40:38 crc kubenswrapper[4875]: I0130 17:40:38.715707 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee80cf17-36cd-440c-be51-68e6db3720f6-catalog-content\") pod \"redhat-marketplace-mf9fm\" (UID: \"ee80cf17-36cd-440c-be51-68e6db3720f6\") 
" pod="openshift-marketplace/redhat-marketplace-mf9fm" Jan 30 17:40:38 crc kubenswrapper[4875]: I0130 17:40:38.737298 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm58s\" (UniqueName: \"kubernetes.io/projected/ee80cf17-36cd-440c-be51-68e6db3720f6-kube-api-access-gm58s\") pod \"redhat-marketplace-mf9fm\" (UID: \"ee80cf17-36cd-440c-be51-68e6db3720f6\") " pod="openshift-marketplace/redhat-marketplace-mf9fm" Jan 30 17:40:38 crc kubenswrapper[4875]: I0130 17:40:38.785599 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-556s6" event={"ID":"2946bc6d-d7c1-4550-952b-6df7af9c86f7","Type":"ContainerStarted","Data":"504e82e991f49bb8981fc8e960300561e92bd5608cd30cedcba9fc796a13f68f"} Jan 30 17:40:38 crc kubenswrapper[4875]: I0130 17:40:38.807217 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-556s6" podStartSLOduration=2.309604126 podStartE2EDuration="4.807193522s" podCreationTimestamp="2026-01-30 17:40:34 +0000 UTC" firstStartedPulling="2026-01-30 17:40:35.758942843 +0000 UTC m=+2646.306306216" lastFinishedPulling="2026-01-30 17:40:38.256532229 +0000 UTC m=+2648.803895612" observedRunningTime="2026-01-30 17:40:38.800620396 +0000 UTC m=+2649.347983799" watchObservedRunningTime="2026-01-30 17:40:38.807193522 +0000 UTC m=+2649.354556905" Jan 30 17:40:38 crc kubenswrapper[4875]: I0130 17:40:38.902996 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mf9fm" Jan 30 17:40:39 crc kubenswrapper[4875]: I0130 17:40:39.187685 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mf9fm"] Jan 30 17:40:39 crc kubenswrapper[4875]: W0130 17:40:39.195028 4875 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee80cf17_36cd_440c_be51_68e6db3720f6.slice/crio-45cba7cf37e00827c1b7a6c4413d44303c21eaa2d54e1a7c242e8dde02bc7e91 WatchSource:0}: Error finding container 45cba7cf37e00827c1b7a6c4413d44303c21eaa2d54e1a7c242e8dde02bc7e91: Status 404 returned error can't find the container with id 45cba7cf37e00827c1b7a6c4413d44303c21eaa2d54e1a7c242e8dde02bc7e91 Jan 30 17:40:39 crc kubenswrapper[4875]: I0130 17:40:39.794262 4875 generic.go:334] "Generic (PLEG): container finished" podID="ee80cf17-36cd-440c-be51-68e6db3720f6" containerID="a7598504394d6c98349ef97212857bf385bde2b68a37b970aeba1ea2748be19d" exitCode=0 Jan 30 17:40:39 crc kubenswrapper[4875]: I0130 17:40:39.794319 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mf9fm" event={"ID":"ee80cf17-36cd-440c-be51-68e6db3720f6","Type":"ContainerDied","Data":"a7598504394d6c98349ef97212857bf385bde2b68a37b970aeba1ea2748be19d"} Jan 30 17:40:39 crc kubenswrapper[4875]: I0130 17:40:39.794369 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mf9fm" event={"ID":"ee80cf17-36cd-440c-be51-68e6db3720f6","Type":"ContainerStarted","Data":"45cba7cf37e00827c1b7a6c4413d44303c21eaa2d54e1a7c242e8dde02bc7e91"} Jan 30 17:40:41 crc kubenswrapper[4875]: I0130 17:40:41.819481 4875 generic.go:334] "Generic (PLEG): container finished" podID="ee80cf17-36cd-440c-be51-68e6db3720f6" containerID="12ed5b74f337000f321f38c260ded4bc7ec6d5404b064b29e29f3b2da1d52af2" exitCode=0 Jan 30 17:40:41 crc kubenswrapper[4875]: I0130 17:40:41.820425 4875 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mf9fm" event={"ID":"ee80cf17-36cd-440c-be51-68e6db3720f6","Type":"ContainerDied","Data":"12ed5b74f337000f321f38c260ded4bc7ec6d5404b064b29e29f3b2da1d52af2"} Jan 30 17:40:42 crc kubenswrapper[4875]: I0130 17:40:42.829303 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mf9fm" event={"ID":"ee80cf17-36cd-440c-be51-68e6db3720f6","Type":"ContainerStarted","Data":"5d0ee6d47c7b167ea6f14b69f7a5b596a7ec40b5be6b0c8fe7e495acda1adedf"} Jan 30 17:40:42 crc kubenswrapper[4875]: I0130 17:40:42.860082 4875 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mf9fm" podStartSLOduration=2.447130535 podStartE2EDuration="4.860065765s" podCreationTimestamp="2026-01-30 17:40:38 +0000 UTC" firstStartedPulling="2026-01-30 17:40:39.796055992 +0000 UTC m=+2650.343419375" lastFinishedPulling="2026-01-30 17:40:42.208991212 +0000 UTC m=+2652.756354605" observedRunningTime="2026-01-30 17:40:42.855406149 +0000 UTC m=+2653.402769532" watchObservedRunningTime="2026-01-30 17:40:42.860065765 +0000 UTC m=+2653.407429148" Jan 30 17:40:44 crc kubenswrapper[4875]: I0130 17:40:44.922894 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-556s6" Jan 30 17:40:44 crc kubenswrapper[4875]: I0130 17:40:44.922937 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-556s6" Jan 30 17:40:44 crc kubenswrapper[4875]: I0130 17:40:44.962436 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-556s6" Jan 30 17:40:45 crc kubenswrapper[4875]: I0130 17:40:45.893430 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-556s6" Jan 30 17:40:46 crc kubenswrapper[4875]: I0130 17:40:46.173427 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-556s6"] Jan 30 17:40:47 crc kubenswrapper[4875]: I0130 17:40:47.864192 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-556s6" podUID="2946bc6d-d7c1-4550-952b-6df7af9c86f7" containerName="registry-server" containerID="cri-o://504e82e991f49bb8981fc8e960300561e92bd5608cd30cedcba9fc796a13f68f" gracePeriod=2 Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.805690 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-556s6" Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.872907 4875 generic.go:334] "Generic (PLEG): container finished" podID="2946bc6d-d7c1-4550-952b-6df7af9c86f7" containerID="504e82e991f49bb8981fc8e960300561e92bd5608cd30cedcba9fc796a13f68f" exitCode=0 Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.872993 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-556s6" event={"ID":"2946bc6d-d7c1-4550-952b-6df7af9c86f7","Type":"ContainerDied","Data":"504e82e991f49bb8981fc8e960300561e92bd5608cd30cedcba9fc796a13f68f"} Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.873024 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-556s6" event={"ID":"2946bc6d-d7c1-4550-952b-6df7af9c86f7","Type":"ContainerDied","Data":"f1be590081792340ada6e199f1ddb76acf940aa8a7fe59b9d69fc480b4636c68"} Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.873046 4875 scope.go:117] "RemoveContainer" containerID="504e82e991f49bb8981fc8e960300561e92bd5608cd30cedcba9fc796a13f68f" Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.873195 4875 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-556s6" Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.895626 4875 scope.go:117] "RemoveContainer" containerID="2c1e610ddfa9d4d49c3af7a85ec894f9ebd13d62c46cb57751972df3a027757e" Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.908713 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mf9fm" Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.908856 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mf9fm" Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.914435 4875 scope.go:117] "RemoveContainer" containerID="59adfe2004808945e0fb96169286d801f72b1b6adbd0d53966fdac8adfa9b1da" Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.957877 4875 scope.go:117] "RemoveContainer" containerID="504e82e991f49bb8981fc8e960300561e92bd5608cd30cedcba9fc796a13f68f" Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.958389 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mf9fm" Jan 30 17:40:48 crc kubenswrapper[4875]: E0130 17:40:48.958666 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"504e82e991f49bb8981fc8e960300561e92bd5608cd30cedcba9fc796a13f68f\": container with ID starting with 504e82e991f49bb8981fc8e960300561e92bd5608cd30cedcba9fc796a13f68f not found: ID does not exist" containerID="504e82e991f49bb8981fc8e960300561e92bd5608cd30cedcba9fc796a13f68f" Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.958701 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"504e82e991f49bb8981fc8e960300561e92bd5608cd30cedcba9fc796a13f68f"} err="failed to get container status \"504e82e991f49bb8981fc8e960300561e92bd5608cd30cedcba9fc796a13f68f\": rpc error: code = NotFound desc = could not find container \"504e82e991f49bb8981fc8e960300561e92bd5608cd30cedcba9fc796a13f68f\": container with ID starting with 504e82e991f49bb8981fc8e960300561e92bd5608cd30cedcba9fc796a13f68f not found: ID does not exist" Jan 30 17:40:48 crc 
kubenswrapper[4875]: I0130 17:40:48.958724 4875 scope.go:117] "RemoveContainer" containerID="2c1e610ddfa9d4d49c3af7a85ec894f9ebd13d62c46cb57751972df3a027757e" Jan 30 17:40:48 crc kubenswrapper[4875]: E0130 17:40:48.959031 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c1e610ddfa9d4d49c3af7a85ec894f9ebd13d62c46cb57751972df3a027757e\": container with ID starting with 2c1e610ddfa9d4d49c3af7a85ec894f9ebd13d62c46cb57751972df3a027757e not found: ID does not exist" containerID="2c1e610ddfa9d4d49c3af7a85ec894f9ebd13d62c46cb57751972df3a027757e" Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.959061 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1e610ddfa9d4d49c3af7a85ec894f9ebd13d62c46cb57751972df3a027757e"} err="failed to get container status \"2c1e610ddfa9d4d49c3af7a85ec894f9ebd13d62c46cb57751972df3a027757e\": rpc error: code = NotFound desc = could not find container \"2c1e610ddfa9d4d49c3af7a85ec894f9ebd13d62c46cb57751972df3a027757e\": container with ID starting with 2c1e610ddfa9d4d49c3af7a85ec894f9ebd13d62c46cb57751972df3a027757e not found: ID does not exist" Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.959080 4875 scope.go:117] "RemoveContainer" containerID="59adfe2004808945e0fb96169286d801f72b1b6adbd0d53966fdac8adfa9b1da" Jan 30 17:40:48 crc kubenswrapper[4875]: E0130 17:40:48.959343 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59adfe2004808945e0fb96169286d801f72b1b6adbd0d53966fdac8adfa9b1da\": container with ID starting with 59adfe2004808945e0fb96169286d801f72b1b6adbd0d53966fdac8adfa9b1da not found: ID does not exist" containerID="59adfe2004808945e0fb96169286d801f72b1b6adbd0d53966fdac8adfa9b1da" Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.959368 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59adfe2004808945e0fb96169286d801f72b1b6adbd0d53966fdac8adfa9b1da"} err="failed to get container status \"59adfe2004808945e0fb96169286d801f72b1b6adbd0d53966fdac8adfa9b1da\": rpc error: code = NotFound desc = could not find container \"59adfe2004808945e0fb96169286d801f72b1b6adbd0d53966fdac8adfa9b1da\": container with ID starting with 59adfe2004808945e0fb96169286d801f72b1b6adbd0d53966fdac8adfa9b1da not found: ID does not exist" Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.986521 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2946bc6d-d7c1-4550-952b-6df7af9c86f7-catalog-content\") pod \"2946bc6d-d7c1-4550-952b-6df7af9c86f7\" (UID: \"2946bc6d-d7c1-4550-952b-6df7af9c86f7\") " Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.986612 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2946bc6d-d7c1-4550-952b-6df7af9c86f7-utilities\") pod \"2946bc6d-d7c1-4550-952b-6df7af9c86f7\" (UID: \"2946bc6d-d7c1-4550-952b-6df7af9c86f7\") " Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.986684 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdzbq\" (UniqueName: \"kubernetes.io/projected/2946bc6d-d7c1-4550-952b-6df7af9c86f7-kube-api-access-qdzbq\") pod \"2946bc6d-d7c1-4550-952b-6df7af9c86f7\" (UID: \"2946bc6d-d7c1-4550-952b-6df7af9c86f7\") " Jan 30 17:40:48 crc 
kubenswrapper[4875]: I0130 17:40:48.987751 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2946bc6d-d7c1-4550-952b-6df7af9c86f7-utilities" (OuterVolumeSpecName: "utilities") pod "2946bc6d-d7c1-4550-952b-6df7af9c86f7" (UID: "2946bc6d-d7c1-4550-952b-6df7af9c86f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:40:48 crc kubenswrapper[4875]: I0130 17:40:48.998160 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2946bc6d-d7c1-4550-952b-6df7af9c86f7-kube-api-access-qdzbq" (OuterVolumeSpecName: "kube-api-access-qdzbq") pod "2946bc6d-d7c1-4550-952b-6df7af9c86f7" (UID: "2946bc6d-d7c1-4550-952b-6df7af9c86f7"). InnerVolumeSpecName "kube-api-access-qdzbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:40:49 crc kubenswrapper[4875]: I0130 17:40:49.055024 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2946bc6d-d7c1-4550-952b-6df7af9c86f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2946bc6d-d7c1-4550-952b-6df7af9c86f7" (UID: "2946bc6d-d7c1-4550-952b-6df7af9c86f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:40:49 crc kubenswrapper[4875]: I0130 17:40:49.088899 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2946bc6d-d7c1-4550-952b-6df7af9c86f7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:40:49 crc kubenswrapper[4875]: I0130 17:40:49.088947 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2946bc6d-d7c1-4550-952b-6df7af9c86f7-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:40:49 crc kubenswrapper[4875]: I0130 17:40:49.088962 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdzbq\" (UniqueName: \"kubernetes.io/projected/2946bc6d-d7c1-4550-952b-6df7af9c86f7-kube-api-access-qdzbq\") on node \"crc\" DevicePath \"\"" Jan 30 17:40:49 crc kubenswrapper[4875]: I0130 17:40:49.212975 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-556s6"] Jan 30 17:40:49 crc kubenswrapper[4875]: I0130 17:40:49.218748 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-556s6"] Jan 30 17:40:49 crc kubenswrapper[4875]: I0130 17:40:49.960885 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mf9fm" Jan 30 17:40:50 crc kubenswrapper[4875]: I0130 17:40:50.147180 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2946bc6d-d7c1-4550-952b-6df7af9c86f7" path="/var/lib/kubelet/pods/2946bc6d-d7c1-4550-952b-6df7af9c86f7/volumes" Jan 30 17:40:52 crc kubenswrapper[4875]: I0130 17:40:52.387730 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mf9fm"] Jan 30 17:40:52 crc kubenswrapper[4875]: I0130 17:40:52.904250 4875 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mf9fm" podUID="ee80cf17-36cd-440c-be51-68e6db3720f6" containerName="registry-server" containerID="cri-o://5d0ee6d47c7b167ea6f14b69f7a5b596a7ec40b5be6b0c8fe7e495acda1adedf" gracePeriod=2 Jan 30 17:40:53 crc kubenswrapper[4875]: I0130 17:40:53.359420 4875 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mf9fm" Jan 30 17:40:53 crc kubenswrapper[4875]: I0130 17:40:53.456641 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee80cf17-36cd-440c-be51-68e6db3720f6-catalog-content\") pod \"ee80cf17-36cd-440c-be51-68e6db3720f6\" (UID: \"ee80cf17-36cd-440c-be51-68e6db3720f6\") " Jan 30 17:40:53 crc kubenswrapper[4875]: I0130 17:40:53.456707 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee80cf17-36cd-440c-be51-68e6db3720f6-utilities\") pod \"ee80cf17-36cd-440c-be51-68e6db3720f6\" (UID: \"ee80cf17-36cd-440c-be51-68e6db3720f6\") " Jan 30 17:40:53 crc kubenswrapper[4875]: I0130 17:40:53.456850 4875 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm58s\" (UniqueName: \"kubernetes.io/projected/ee80cf17-36cd-440c-be51-68e6db3720f6-kube-api-access-gm58s\") pod \"ee80cf17-36cd-440c-be51-68e6db3720f6\" (UID: \"ee80cf17-36cd-440c-be51-68e6db3720f6\") " Jan 30 17:40:53 crc kubenswrapper[4875]: I0130 17:40:53.457547 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee80cf17-36cd-440c-be51-68e6db3720f6-utilities" (OuterVolumeSpecName: "utilities") pod "ee80cf17-36cd-440c-be51-68e6db3720f6" (UID: "ee80cf17-36cd-440c-be51-68e6db3720f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:40:53 crc kubenswrapper[4875]: I0130 17:40:53.462864 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee80cf17-36cd-440c-be51-68e6db3720f6-kube-api-access-gm58s" (OuterVolumeSpecName: "kube-api-access-gm58s") pod "ee80cf17-36cd-440c-be51-68e6db3720f6" (UID: "ee80cf17-36cd-440c-be51-68e6db3720f6"). InnerVolumeSpecName "kube-api-access-gm58s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:40:53 crc kubenswrapper[4875]: I0130 17:40:53.479054 4875 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee80cf17-36cd-440c-be51-68e6db3720f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee80cf17-36cd-440c-be51-68e6db3720f6" (UID: "ee80cf17-36cd-440c-be51-68e6db3720f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.491371 4875 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm58s\" (UniqueName: \"kubernetes.io/projected/ee80cf17-36cd-440c-be51-68e6db3720f6-kube-api-access-gm58s\") on node \"crc\" DevicePath \"\"" Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.491403 4875 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee80cf17-36cd-440c-be51-68e6db3720f6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.491414 4875 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee80cf17-36cd-440c-be51-68e6db3720f6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.517094 4875 generic.go:334] "Generic (PLEG): container finished" podID="ee80cf17-36cd-440c-be51-68e6db3720f6" containerID="5d0ee6d47c7b167ea6f14b69f7a5b596a7ec40b5be6b0c8fe7e495acda1adedf" exitCode=0 Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.517174 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mf9fm" event={"ID":"ee80cf17-36cd-440c-be51-68e6db3720f6","Type":"ContainerDied","Data":"5d0ee6d47c7b167ea6f14b69f7a5b596a7ec40b5be6b0c8fe7e495acda1adedf"} Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.517302 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mf9fm" event={"ID":"ee80cf17-36cd-440c-be51-68e6db3720f6","Type":"ContainerDied","Data":"45cba7cf37e00827c1b7a6c4413d44303c21eaa2d54e1a7c242e8dde02bc7e91"} Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.517341 4875 scope.go:117] "RemoveContainer" containerID="5d0ee6d47c7b167ea6f14b69f7a5b596a7ec40b5be6b0c8fe7e495acda1adedf" Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.517647 4875 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mf9fm" Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.547757 4875 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mf9fm"] Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.548202 4875 scope.go:117] "RemoveContainer" containerID="12ed5b74f337000f321f38c260ded4bc7ec6d5404b064b29e29f3b2da1d52af2" Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.555788 4875 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mf9fm"] Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.580787 4875 scope.go:117] "RemoveContainer" containerID="a7598504394d6c98349ef97212857bf385bde2b68a37b970aeba1ea2748be19d" Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.615819 4875 scope.go:117] "RemoveContainer" containerID="5d0ee6d47c7b167ea6f14b69f7a5b596a7ec40b5be6b0c8fe7e495acda1adedf" Jan 30 17:40:54 crc kubenswrapper[4875]: E0130 17:40:54.616273 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d0ee6d47c7b167ea6f14b69f7a5b596a7ec40b5be6b0c8fe7e495acda1adedf\": container with ID starting with 5d0ee6d47c7b167ea6f14b69f7a5b596a7ec40b5be6b0c8fe7e495acda1adedf not found: ID does not exist" containerID="5d0ee6d47c7b167ea6f14b69f7a5b596a7ec40b5be6b0c8fe7e495acda1adedf" Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.616338 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d0ee6d47c7b167ea6f14b69f7a5b596a7ec40b5be6b0c8fe7e495acda1adedf"} err="failed to get container status \"5d0ee6d47c7b167ea6f14b69f7a5b596a7ec40b5be6b0c8fe7e495acda1adedf\": rpc error: code = NotFound desc = could not find container \"5d0ee6d47c7b167ea6f14b69f7a5b596a7ec40b5be6b0c8fe7e495acda1adedf\": container with ID starting with 5d0ee6d47c7b167ea6f14b69f7a5b596a7ec40b5be6b0c8fe7e495acda1adedf not found: ID does not exist" Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.616397 4875 scope.go:117] "RemoveContainer" containerID="12ed5b74f337000f321f38c260ded4bc7ec6d5404b064b29e29f3b2da1d52af2" Jan 30 17:40:54 crc kubenswrapper[4875]: E0130 17:40:54.616921 4875 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12ed5b74f337000f321f38c260ded4bc7ec6d5404b064b29e29f3b2da1d52af2\": container with ID starting with 12ed5b74f337000f321f38c260ded4bc7ec6d5404b064b29e29f3b2da1d52af2 not found: ID does not exist" containerID="12ed5b74f337000f321f38c260ded4bc7ec6d5404b064b29e29f3b2da1d52af2" Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.617003 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12ed5b74f337000f321f38c260ded4bc7ec6d5404b064b29e29f3b2da1d52af2"} err="failed to get container status \"12ed5b74f337000f321f38c260ded4bc7ec6d5404b064b29e29f3b2da1d52af2\": rpc error: code = NotFound desc = could not find container \"12ed5b74f337000f321f38c260ded4bc7ec6d5404b064b29e29f3b2da1d52af2\": container with ID starting with 12ed5b74f337000f321f38c260ded4bc7ec6d5404b064b29e29f3b2da1d52af2 not found: ID does not exist" Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.617060 4875 scope.go:117] "RemoveContainer" containerID="a7598504394d6c98349ef97212857bf385bde2b68a37b970aeba1ea2748be19d" Jan 30 17:40:54 crc kubenswrapper[4875]: E0130 17:40:54.617406 4875 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a7598504394d6c98349ef97212857bf385bde2b68a37b970aeba1ea2748be19d\": container with ID starting with a7598504394d6c98349ef97212857bf385bde2b68a37b970aeba1ea2748be19d not found: ID does not exist" containerID="a7598504394d6c98349ef97212857bf385bde2b68a37b970aeba1ea2748be19d" Jan 30 17:40:54 crc kubenswrapper[4875]: I0130 17:40:54.617455 4875 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7598504394d6c98349ef97212857bf385bde2b68a37b970aeba1ea2748be19d"} err="failed to get container status \"a7598504394d6c98349ef97212857bf385bde2b68a37b970aeba1ea2748be19d\": rpc error: code = NotFound desc = could not find container \"a7598504394d6c98349ef97212857bf385bde2b68a37b970aeba1ea2748be19d\": container with ID starting with a7598504394d6c98349ef97212857bf385bde2b68a37b970aeba1ea2748be19d not found: ID does not exist" Jan 30 17:40:56 crc kubenswrapper[4875]: I0130 17:40:56.146313 4875 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee80cf17-36cd-440c-be51-68e6db3720f6" path="/var/lib/kubelet/pods/ee80cf17-36cd-440c-be51-68e6db3720f6/volumes" Jan 30 17:40:56 crc kubenswrapper[4875]: I0130 17:40:56.288086 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:40:56 crc kubenswrapper[4875]: I0130 17:40:56.288151 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:41:10 crc kubenswrapper[4875]: I0130 17:41:10.886111 4875 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p4rgx"] Jan 30 17:41:10 crc kubenswrapper[4875]: E0130 17:41:10.886978 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee80cf17-36cd-440c-be51-68e6db3720f6" containerName="registry-server" Jan 30 17:41:10 crc kubenswrapper[4875]: I0130 17:41:10.886995 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee80cf17-36cd-440c-be51-68e6db3720f6" containerName="registry-server" Jan 30 17:41:10 crc kubenswrapper[4875]: E0130 17:41:10.887018 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee80cf17-36cd-440c-be51-68e6db3720f6" containerName="extract-utilities" Jan 30 17:41:10 crc kubenswrapper[4875]: I0130 17:41:10.887026 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee80cf17-36cd-440c-be51-68e6db3720f6" containerName="extract-utilities" Jan 30 17:41:10 crc kubenswrapper[4875]: E0130 17:41:10.887039 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2946bc6d-d7c1-4550-952b-6df7af9c86f7" containerName="extract-utilities" Jan 30 17:41:10 crc kubenswrapper[4875]: I0130 17:41:10.887050 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="2946bc6d-d7c1-4550-952b-6df7af9c86f7" containerName="extract-utilities" Jan 30 17:41:10 crc kubenswrapper[4875]: E0130 17:41:10.887059 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee80cf17-36cd-440c-be51-68e6db3720f6" containerName="extract-content" Jan 30 17:41:10 crc kubenswrapper[4875]: 
I0130 17:41:10.887066 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee80cf17-36cd-440c-be51-68e6db3720f6" containerName="extract-content" Jan 30 17:41:10 crc kubenswrapper[4875]: E0130 17:41:10.887084 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2946bc6d-d7c1-4550-952b-6df7af9c86f7" containerName="registry-server" Jan 30 17:41:10 crc kubenswrapper[4875]: I0130 17:41:10.887091 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="2946bc6d-d7c1-4550-952b-6df7af9c86f7" containerName="registry-server" Jan 30 17:41:10 crc kubenswrapper[4875]: E0130 17:41:10.887104 4875 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2946bc6d-d7c1-4550-952b-6df7af9c86f7" containerName="extract-content" Jan 30 17:41:10 crc kubenswrapper[4875]: I0130 17:41:10.887112 4875 state_mem.go:107] "Deleted CPUSet assignment" podUID="2946bc6d-d7c1-4550-952b-6df7af9c86f7" containerName="extract-content" Jan 30 17:41:10 crc kubenswrapper[4875]: I0130 17:41:10.887291 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="2946bc6d-d7c1-4550-952b-6df7af9c86f7" containerName="registry-server" Jan 30 17:41:10 crc kubenswrapper[4875]: I0130 17:41:10.887308 4875 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee80cf17-36cd-440c-be51-68e6db3720f6" containerName="registry-server" Jan 30 17:41:10 crc kubenswrapper[4875]: I0130 17:41:10.888690 4875 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p4rgx" Jan 30 17:41:10 crc kubenswrapper[4875]: I0130 17:41:10.898645 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p4rgx"] Jan 30 17:41:11 crc kubenswrapper[4875]: I0130 17:41:11.037087 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/634693c5-bf36-44ff-9638-379b7e7d31e5-catalog-content\") pod \"redhat-operators-p4rgx\" (UID: \"634693c5-bf36-44ff-9638-379b7e7d31e5\") " pod="openshift-marketplace/redhat-operators-p4rgx" Jan 30 17:41:11 crc kubenswrapper[4875]: I0130 17:41:11.037161 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/634693c5-bf36-44ff-9638-379b7e7d31e5-utilities\") pod \"redhat-operators-p4rgx\" (UID: \"634693c5-bf36-44ff-9638-379b7e7d31e5\") " pod="openshift-marketplace/redhat-operators-p4rgx" Jan 30 17:41:11 crc kubenswrapper[4875]: I0130 17:41:11.037229 4875 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49rck\" (UniqueName: \"kubernetes.io/projected/634693c5-bf36-44ff-9638-379b7e7d31e5-kube-api-access-49rck\") pod \"redhat-operators-p4rgx\" (UID: \"634693c5-bf36-44ff-9638-379b7e7d31e5\") " pod="openshift-marketplace/redhat-operators-p4rgx" Jan 30 17:41:11 crc kubenswrapper[4875]: I0130 17:41:11.139101 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/634693c5-bf36-44ff-9638-379b7e7d31e5-catalog-content\") pod \"redhat-operators-p4rgx\" (UID: \"634693c5-bf36-44ff-9638-379b7e7d31e5\") " pod="openshift-marketplace/redhat-operators-p4rgx" Jan 30 17:41:11 crc kubenswrapper[4875]: I0130 17:41:11.139180 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/634693c5-bf36-44ff-9638-379b7e7d31e5-utilities\") pod \"redhat-operators-p4rgx\" (UID: \"634693c5-bf36-44ff-9638-379b7e7d31e5\") " pod="openshift-marketplace/redhat-operators-p4rgx" Jan 30 17:41:11 crc kubenswrapper[4875]: I0130 17:41:11.139225 4875 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49rck\" (UniqueName: \"kubernetes.io/projected/634693c5-bf36-44ff-9638-379b7e7d31e5-kube-api-access-49rck\") pod \"redhat-operators-p4rgx\" (UID: \"634693c5-bf36-44ff-9638-379b7e7d31e5\") " pod="openshift-marketplace/redhat-operators-p4rgx" Jan 30 17:41:11 crc kubenswrapper[4875]: I0130 17:41:11.140004 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/634693c5-bf36-44ff-9638-379b7e7d31e5-catalog-content\") pod \"redhat-operators-p4rgx\" (UID: \"634693c5-bf36-44ff-9638-379b7e7d31e5\") " pod="openshift-marketplace/redhat-operators-p4rgx" Jan 30 17:41:11 crc kubenswrapper[4875]: I0130 17:41:11.140040 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/634693c5-bf36-44ff-9638-379b7e7d31e5-utilities\") pod \"redhat-operators-p4rgx\" (UID: \"634693c5-bf36-44ff-9638-379b7e7d31e5\") " pod="openshift-marketplace/redhat-operators-p4rgx" Jan 30 17:41:11 crc kubenswrapper[4875]: I0130 17:41:11.161849 4875 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49rck\" (UniqueName: \"kubernetes.io/projected/634693c5-bf36-44ff-9638-379b7e7d31e5-kube-api-access-49rck\") pod \"redhat-operators-p4rgx\" (UID: \"634693c5-bf36-44ff-9638-379b7e7d31e5\") " pod="openshift-marketplace/redhat-operators-p4rgx" Jan 30 17:41:11 crc kubenswrapper[4875]: I0130 17:41:11.212304 4875 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p4rgx" Jan 30 17:41:11 crc kubenswrapper[4875]: I0130 17:41:11.686268 4875 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p4rgx"] Jan 30 17:41:12 crc kubenswrapper[4875]: I0130 17:41:12.657460 4875 generic.go:334] "Generic (PLEG): container finished" podID="634693c5-bf36-44ff-9638-379b7e7d31e5" containerID="dd52ff6a818df71fbc8bb3ea96a9fd8bba452c7d70a5e7c9d68e8d8d22e3f16d" exitCode=0 Jan 30 17:41:12 crc kubenswrapper[4875]: I0130 17:41:12.657843 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4rgx" event={"ID":"634693c5-bf36-44ff-9638-379b7e7d31e5","Type":"ContainerDied","Data":"dd52ff6a818df71fbc8bb3ea96a9fd8bba452c7d70a5e7c9d68e8d8d22e3f16d"} Jan 30 17:41:12 crc kubenswrapper[4875]: I0130 17:41:12.657869 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4rgx" event={"ID":"634693c5-bf36-44ff-9638-379b7e7d31e5","Type":"ContainerStarted","Data":"e9cfae006e9ceac3ca9fe4f001c488cc43f78a8e223c98cde1edf7951a9e2c79"} Jan 30 17:41:13 crc kubenswrapper[4875]: I0130 17:41:13.666497 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4rgx" event={"ID":"634693c5-bf36-44ff-9638-379b7e7d31e5","Type":"ContainerStarted","Data":"c0369c732977126689efd0c541192a2ea431561fe1aaaa930c5798de1b52e39d"} Jan 30 17:41:14 crc kubenswrapper[4875]: I0130 17:41:14.682186 4875 generic.go:334] "Generic (PLEG): container finished" podID="634693c5-bf36-44ff-9638-379b7e7d31e5" containerID="c0369c732977126689efd0c541192a2ea431561fe1aaaa930c5798de1b52e39d" exitCode=0 Jan 30 17:41:14 crc kubenswrapper[4875]: I0130 17:41:14.682235 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4rgx" event={"ID":"634693c5-bf36-44ff-9638-379b7e7d31e5","Type":"ContainerDied","Data":"c0369c732977126689efd0c541192a2ea431561fe1aaaa930c5798de1b52e39d"} Jan 30 17:41:16 crc kubenswrapper[4875]: I0130 17:41:16.699860 4875 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p4rgx" event={"ID":"634693c5-bf36-44ff-9638-379b7e7d31e5","Type":"ContainerStarted","Data":"32af9edcd86091633d24526aac9b60638d166f290f97129c0b5af3f126fd69e0"} Jan 30 17:41:21 crc kubenswrapper[4875]: I0130 17:41:21.213532 4875 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p4rgx" Jan 30 17:41:21 crc kubenswrapper[4875]: I0130 17:41:21.214104 4875 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p4rgx" Jan 30 17:41:22 crc kubenswrapper[4875]: I0130 17:41:22.254449 4875 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p4rgx" podUID="634693c5-bf36-44ff-9638-379b7e7d31e5" containerName="registry-server" probeResult="failure" output=< Jan 30 17:41:22 crc kubenswrapper[4875]: timeout: failed to connect service ":50051" within 1s Jan 30 17:41:22 crc kubenswrapper[4875]: > Jan 30 17:41:26 crc kubenswrapper[4875]: I0130 17:41:26.287206 4875 patch_prober.go:28] interesting pod/machine-config-daemon-9wgsn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:41:26 crc kubenswrapper[4875]: I0130 
17:41:26.287812 4875 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wgsn" podUID="9cfabc70-3a7a-4fdb-bd21-f2648c9eabb8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"